=== RUN TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (53.521276816s)
version_upgrade_test.go:234: (dbg) Run: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220701225105-10066
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220701225105-10066: (1.451591108s)
version_upgrade_test.go:239: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-20220701225105-10066 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220701225105-10066 status --format={{.Host}}: exit status 7 (143.53982ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
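The exit status 7 above is expected at this point: the cluster was just stopped, so status prints "Stopped" and exits non-zero rather than zero, which the test tolerates. A minimal sketch of the same check run by hand (profile name illustrative):

    out/minikube-linux-amd64 status -p <profile> --format={{.Host}}   # prints: Stopped
    echo $?                                                           # 7 for a stopped host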
version_upgrade_test.go:250: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 109 (8m27.238269737s)
-- stdout --
* [kubernetes-upgrade-20220701225105-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=14483
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on existing profile
* Starting control plane node kubernetes-upgrade-20220701225105-10066 in cluster kubernetes-upgrade-20220701225105-10066
* Pulling base image ...
* Restarting existing docker container for "kubernetes-upgrade-20220701225105-10066" ...
* Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
- kubelet.cni-conf-dir=/etc/cni/net.mk
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
-- /stdout --
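The kubelet error above is the root cause of the exit status 109: Kubernetes v1.24 removed kubelet's dockershim-era CNI flags, including --cni-conf-dir, but the profile created under v1.16.0 still carries the extra option kubelet.cni-conf-dir=/etc/cni/net.mk (visible in ExtraOptions in the config dump in the stderr log below), so the upgraded kubelet refuses to start. Outside of this test, a minimal sketch of one way to clear the stale flag is to recreate the profile without it (profile name taken from the log):

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066
    out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.24.2 --driver=docker --container-runtime=containerd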
** stderr **
I0701 22:52:00.949558 160696 out.go:296] Setting OutFile to fd 1 ...
I0701 22:52:00.949689 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:52:00.949695 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:52:00.949702 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:52:00.950239 160696 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
I0701 22:52:00.950529 160696 out.go:303] Setting JSON to false
I0701 22:52:00.975830 160696 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2074,"bootTime":1656713847,"procs":556,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0701 22:52:00.975934 160696 start.go:125] virtualization: kvm guest
I0701 22:52:00.978967 160696 out.go:177] * [kubernetes-upgrade-20220701225105-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0701 22:52:00.981051 160696 out.go:177] - MINIKUBE_LOCATION=14483
I0701 22:52:00.980969 160696 notify.go:193] Checking for updates...
I0701 22:52:00.986459 160696 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 22:52:00.988066 160696 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:00.989522 160696 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
I0701 22:52:00.990790 160696 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0701 22:52:00.992493 160696 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225105-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0701 22:52:00.992908 160696 driver.go:360] Setting default libvirt URI to qemu:///system
I0701 22:52:01.050438 160696 docker.go:137] docker version: linux-20.10.17
I0701 22:52:01.050581 160696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:52:01.231659  160696 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:86 SystemTime:2022-07-01 22:52:01.093554422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:52:01.231800 160696 docker.go:254] overlay module found
I0701 22:52:01.234069 160696 out.go:177] * Using the docker driver based on existing profile
I0701 22:52:01.235709 160696 start.go:284] selected driver: docker
I0701 22:52:01.235725  160696 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220701225105-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:01.235870 160696 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 22:52:01.236992 160696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:52:01.407033  160696 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:86 SystemTime:2022-07-01 22:52:01.282017434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:52:01.407274 160696 cni.go:95] Creating CNI manager for ""
I0701 22:52:01.407290 160696 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0701 22:52:01.407306 160696 start_flags.go:310] config:
{Name:kubernetes-upgrade-20220701225105-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:01.410268 160696 out.go:177] * Starting control plane node kubernetes-upgrade-20220701225105-10066 in cluster kubernetes-upgrade-20220701225105-10066
I0701 22:52:01.411526 160696 cache.go:120] Beginning downloading kic base image for docker with containerd
I0701 22:52:01.412671 160696 out.go:177] * Pulling base image ...
I0701 22:52:01.413680 160696 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
I0701 22:52:01.413721 160696 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
I0701 22:52:01.413734 160696 cache.go:57] Caching tarball of preloaded images
I0701 22:52:01.413783 160696 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
I0701 22:52:01.413962 160696 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 22:52:01.413980 160696 cache.go:60] Finished verifying existence of preloaded tar for v1.24.2 on containerd
I0701 22:52:01.414097 160696 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/config.json ...
I0701 22:52:01.469670 160696 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
I0701 22:52:01.469726 160696 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
I0701 22:52:01.469739 160696 cache.go:208] Successfully downloaded all kic artifacts
I0701 22:52:01.469790 160696 start.go:352] acquiring machines lock for kubernetes-upgrade-20220701225105-10066: {Name:mkca4ee4e060684b1a65a01b55d7372a7dadaa9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 22:52:01.469908 160696 start.go:356] acquired machines lock for "kubernetes-upgrade-20220701225105-10066" in 90.758µs
I0701 22:52:01.469938 160696 start.go:94] Skipping create...Using existing machine configuration
I0701 22:52:01.469949 160696 fix.go:55] fixHost starting:
I0701 22:52:01.470241 160696 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225105-10066 --format={{.State.Status}}
I0701 22:52:01.505556 160696 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220701225105-10066: state=Stopped err=<nil>
W0701 22:52:01.505586 160696 fix.go:129] unexpected machine state, will restart: <nil>
I0701 22:52:01.507298 160696 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220701225105-10066" ...
I0701 22:52:01.508421 160696 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220701225105-10066
I0701 22:52:01.950093 160696 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220701225105-10066 --format={{.State.Status}}
I0701 22:52:01.988039 160696 kic.go:416] container "kubernetes-upgrade-20220701225105-10066" state is running.
I0701 22:52:01.988398 160696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225105-10066
I0701 22:52:02.025152 160696 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/config.json ...
I0701 22:52:02.025400 160696 machine.go:88] provisioning docker machine ...
I0701 22:52:02.025428 160696 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220701225105-10066"
I0701 22:52:02.025476 160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
I0701 22:52:02.065608 160696 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:02.065841 160696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49338 <nil> <nil>}
I0701 22:52:02.065866 160696 main.go:134] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-20220701225105-10066 && echo "kubernetes-upgrade-20220701225105-10066" | sudo tee /etc/hostname
I0701 22:52:02.066480 160696 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47814->127.0.0.1:49338: read: connection reset by peer
I0701 22:52:05.200330 160696 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220701225105-10066
I0701 22:52:05.200409 160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
I0701 22:52:05.245209 160696 main.go:134] libmachine: Using SSH client type: native
I0701 22:52:05.245419 160696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49338 <nil> <nil>}
I0701 22:52:05.245457 160696 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-20220701225105-10066' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220701225105-10066/g' /etc/hosts;
else
echo '127.0.1.1 kubernetes-upgrade-20220701225105-10066' | sudo tee -a /etc/hosts;
fi
fi
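The SSH snippet above is minikube's hostname fix-up: if no /etc/hosts entry ends in the machine name, it rewrites an existing 127.0.1.1 line (or appends one) so the node can resolve its own hostname, leaving a line of the form:

    127.0.1.1 kubernetes-upgrade-20220701225105-10066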
I0701 22:52:05.374979 160696 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0701 22:52:05.375012  160696 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
I0701 22:52:05.375048 160696 ubuntu.go:177] setting up certificates
I0701 22:52:05.375069 160696 provision.go:83] configureAuth start
I0701 22:52:05.375136 160696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225105-10066
I0701 22:52:05.431349 160696 provision.go:138] copyHostCerts
I0701 22:52:05.431422 160696 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
I0701 22:52:05.431434 160696 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
I0701 22:52:05.431511 160696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
I0701 22:52:05.431625 160696 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
I0701 22:52:05.431642 160696 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
I0701 22:52:05.431702 160696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
I0701 22:52:05.431801 160696 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
I0701 22:52:05.431809 160696 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
I0701 22:52:05.431844 160696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
I0701 22:52:05.431896 160696 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220701225105-10066 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220701225105-10066]
I0701 22:52:05.512958 160696 provision.go:172] copyRemoteCerts
I0701 22:52:05.513009 160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 22:52:05.513046 160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
I0701 22:52:05.564725 160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
I0701 22:52:05.656435 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 22:52:05.674626 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
I0701 22:52:05.691508 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0701 22:52:05.711010 160696 provision.go:86] duration metric: configureAuth took 335.924506ms
I0701 22:52:05.711039 160696 ubuntu.go:193] setting minikube options for container-runtime
I0701 22:52:05.711243 160696 config.go:178] Loaded profile config "kubernetes-upgrade-20220701225105-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
I0701 22:52:05.711260 160696 machine.go:91] provisioned docker machine in 3.685843473s
I0701 22:52:05.711268 160696 start.go:306] post-start starting for "kubernetes-upgrade-20220701225105-10066" (driver="docker")
I0701 22:52:05.711281 160696 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 22:52:05.711328 160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 22:52:05.711368 160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
I0701 22:52:05.746101 160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
I0701 22:52:05.834588 160696 ssh_runner.go:195] Run: cat /etc/os-release
I0701 22:52:05.837470 160696 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0701 22:52:05.837501 160696 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0701 22:52:05.837515 160696 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0701 22:52:05.837523 160696 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0701 22:52:05.837533 160696 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
I0701 22:52:05.837599 160696 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
I0701 22:52:05.837688 160696 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
I0701 22:52:05.837792 160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 22:52:05.845188 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
I0701 22:52:05.862700 160696 start.go:309] post-start completed in 151.416144ms
I0701 22:52:05.862755 160696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0701 22:52:05.862798 160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
I0701 22:52:05.896029 160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
I0701 22:52:05.979023 160696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0701 22:52:05.982893 160696 fix.go:57] fixHost completed within 4.512942606s
I0701 22:52:05.982912 160696 start.go:81] releasing machines lock for "kubernetes-upgrade-20220701225105-10066", held for 4.512990385s
I0701 22:52:05.983019 160696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220701225105-10066
I0701 22:52:06.020989 160696 ssh_runner.go:195] Run: systemctl --version
I0701 22:52:06.021040 160696 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0701 22:52:06.021053 160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
I0701 22:52:06.021103 160696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220701225105-10066
I0701 22:52:06.076141 160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
I0701 22:52:06.076429 160696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49338 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/kubernetes-upgrade-20220701225105-10066/id_rsa Username:docker}
I0701 22:52:06.184470 160696 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0701 22:52:06.198509 160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 22:52:06.209942 160696 docker.go:179] disabling docker service ...
I0701 22:52:06.210000 160696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0701 22:52:06.220216 160696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0701 22:52:06.231773 160696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0701 22:52:06.327570 160696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0701 22:52:06.422909 160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0701 22:52:06.434878 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 22:52:06.450435 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0701 22:52:06.460554 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0701 22:52:06.470974 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0701 22:52:06.482039 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I0701 22:52:06.492195 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
I0701 22:52:06.502060 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
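The base64 payload written above is just containerd's config-version header; it decodes to a single line, which you can confirm with:

    printf %s "dmVyc2lvbiA9IDIK" | base64 -d   # prints: version = 2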
I0701 22:52:06.516711 160696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0701 22:52:06.527755 160696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0701 22:52:06.534150 160696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:52:06.628808 160696 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0701 22:52:06.751081 160696 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
I0701 22:52:06.751155 160696 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0701 22:52:06.755655 160696 start.go:471] Will wait 60s for crictl version
I0701 22:52:06.755726 160696 ssh_runner.go:195] Run: sudo crictl version
I0701 22:52:06.807504 160696 start.go:480] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.6
RuntimeApiVersion: v1alpha2
I0701 22:52:06.807554 160696 ssh_runner.go:195] Run: containerd --version
I0701 22:52:06.842805 160696 ssh_runner.go:195] Run: containerd --version
I0701 22:52:06.880735 160696 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
I0701 22:52:06.881945 160696 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220701225105-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0701 22:52:06.917815 160696 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0701 22:52:06.921307 160696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
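The one-liner above pins host.minikube.internal to the docker network gateway idempotently: it filters any existing entry out of /etc/hosts, appends "192.168.76.1 host.minikube.internal", and copies the temp file back with sudo, since a plain shell redirect cannot write to /etc/hosts directly.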
I0701 22:52:06.935690 160696 out.go:177] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0701 22:52:06.937480 160696 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
I0701 22:52:06.937551 160696 ssh_runner.go:195] Run: sudo crictl images --output json
I0701 22:52:06.965301 160696 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.2". assuming images are not preloaded.
I0701 22:52:06.965357 160696 ssh_runner.go:195] Run: which lz4
I0701 22:52:06.968470 160696 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0701 22:52:06.971551 160696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0701 22:52:06.971578 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (447741112 bytes)
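This stat-then-scp pattern repeats for every cached artifact below: minikube first stats the remote path, and a non-zero exit (file absent) triggers the copy from the host cache. The same idiom in plain shell, paths illustrative:

    stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1 || echo "absent, falling back to scp"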
I0701 22:52:07.937148 160696 containerd.go:490] Took 0.968708 seconds to copy over tarball
I0701 22:52:07.937211 160696 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0701 22:52:11.987915 160696 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.05068032s)
I0701 22:52:11.987948  160696 containerd.go:497] Took 4.050772 seconds to extract the tarball
I0701 22:52:11.987960 160696 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0701 22:52:12.182505 160696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 22:52:12.265856 160696 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0701 22:52:12.348706 160696 ssh_runner.go:195] Run: sudo crictl images --output json
I0701 22:52:12.376428 160696 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.24.2 k8s.gcr.io/kube-controller-manager:v1.24.2 k8s.gcr.io/kube-scheduler:v1.24.2 k8s.gcr.io/kube-proxy:v1.24.2 k8s.gcr.io/pause:3.7 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
I0701 22:52:12.376514 160696 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:12.376531 160696 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.24.2
I0701 22:52:12.376559 160696 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.24.2
I0701 22:52:12.376565 160696 image.go:134] retrieving image: k8s.gcr.io/etcd:3.5.3-0
I0701 22:52:12.376577 160696 image.go:134] retrieving image: k8s.gcr.io/pause:3.7
I0701 22:52:12.376589 160696 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.24.2
I0701 22:52:12.376532 160696 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.24.2
I0701 22:52:12.376727 160696 image.go:134] retrieving image: k8s.gcr.io/coredns/coredns:v1.8.6
I0701 22:52:12.378085 160696 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.24.2: Error: No such image: k8s.gcr.io/kube-apiserver:v1.24.2
I0701 22:52:12.378099 160696 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.5.3-0: Error: No such image: k8s.gcr.io/etcd:3.5.3-0
I0701 22:52:12.378110 160696 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.24.2: Error: No such image: k8s.gcr.io/kube-proxy:v1.24.2
I0701 22:52:12.378087 160696 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.24.2: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.24.2
I0701 22:52:12.378088 160696 image.go:177] daemon lookup for k8s.gcr.io/pause:3.7: Error: No such image: k8s.gcr.io/pause:3.7
I0701 22:52:12.378120 160696 image.go:177] daemon lookup for k8s.gcr.io/coredns/coredns:v1.8.6: Error: No such image: k8s.gcr.io/coredns/coredns:v1.8.6
I0701 22:52:12.378157 160696 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.24.2: Error: No such image: k8s.gcr.io/kube-scheduler:v1.24.2
I0701 22:52:12.378088 160696 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:12.601423 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.5.3-0"
I0701 22:52:12.601711 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.24.2"
I0701 22:52:12.601886 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.24.2"
I0701 22:52:12.603244 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.2"
I0701 22:52:12.622981 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.24.2"
I0701 22:52:12.646441 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns/coredns:v1.8.6"
I0701 22:52:12.648175 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.7"
I0701 22:52:12.691839 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
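Each check above greps containerd's k8s.io namespace, which is where the CRI plugin keeps its images; anything the grep misses is removed via crictl and re-imported from the host cache in the steps below. A hand-run equivalent, as a sketch:

    sudo ctr -n k8s.io images ls | grep kube-apiserver   # no output means the image still needs loading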
I0701 22:52:13.527303 160696 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.24.2" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.24.2" does not exist at hash "34cdf99b1bb3b3a62c5b4226c3bc0983ab1f13e776269d1872092091b07203df" in container runtime
I0701 22:52:13.527358 160696 cri.go:216] Removing image: k8s.gcr.io/kube-controller-manager:v1.24.2
I0701 22:52:13.527403 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:13.527487 160696 cache_images.go:116] "k8s.gcr.io/etcd:3.5.3-0" needs transfer: "k8s.gcr.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
I0701 22:52:13.527517 160696 cri.go:216] Removing image: k8s.gcr.io/etcd:3.5.3-0
I0701 22:52:13.527549 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:13.527664 160696 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.24.2" needs transfer: "k8s.gcr.io/kube-proxy:v1.24.2" does not exist at hash "a634548d10b032c2a1d704ef9a2ab04c12b0574afe67ee192b196a7f12da9536" in container runtime
I0701 22:52:13.527706 160696 cri.go:216] Removing image: k8s.gcr.io/kube-proxy:v1.24.2
I0701 22:52:13.527738 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:13.531394 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.24.2
I0701 22:52:13.531455 160696 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.24.2" needs transfer: "k8s.gcr.io/kube-scheduler:v1.24.2" does not exist at hash "5d725196c1f47e72d2bc7069776d5928b1fb1e4adf09c18997733099aa3663ac" in container runtime
I0701 22:52:13.531490 160696 cri.go:216] Removing image: k8s.gcr.io/kube-scheduler:v1.24.2
I0701 22:52:13.531547 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:13.626875 160696 cache_images.go:116] "k8s.gcr.io/coredns/coredns:v1.8.6" needs transfer: "k8s.gcr.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
I0701 22:52:13.626929 160696 cri.go:216] Removing image: k8s.gcr.io/coredns/coredns:v1.8.6
I0701 22:52:13.626969 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:13.631737 160696 cache_images.go:116] "k8s.gcr.io/pause:3.7" needs transfer: "k8s.gcr.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
I0701 22:52:13.631774 160696 cri.go:216] Removing image: k8s.gcr.io/pause:3.7
I0701 22:52:13.631808 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:13.639414 160696 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.24.2": (1.036134584s)
I0701 22:52:13.639444 160696 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.24.2" needs transfer: "k8s.gcr.io/kube-apiserver:v1.24.2" does not exist at hash "d3377ffb7177cc4becce8a534d8547aca9530cb30fac9ebe479b31102f1ba503" in container runtime
I0701 22:52:13.639466 160696 cri.go:216] Removing image: k8s.gcr.io/kube-apiserver:v1.24.2
I0701 22:52:13.639504 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:13.639512 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/etcd:3.5.3-0
I0701 22:52:13.639554 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-proxy:v1.24.2
I0701 22:52:13.639564 160696 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0701 22:52:13.639596 160696 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:13.639628 160696 ssh_runner.go:195] Run: which crictl
I0701 22:52:14.500939 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.24.2
I0701 22:52:14.500949 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2
I0701 22:52:14.501045 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/coredns/coredns:v1.8.6
I0701 22:52:14.501091 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2
I0701 22:52:14.501121 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/pause:3.7
I0701 22:52:14.501163 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.24.2
I0701 22:52:14.516569 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2
I0701 22:52:14.516675 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2
I0701 22:52:14.520642 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0
I0701 22:52:14.520810 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
I0701 22:52:14.520885 160696 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0701 22:52:14.782487 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2
I0701 22:52:14.782518 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6
I0701 22:52:14.782621 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2
I0701 22:52:14.782639 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I0701 22:52:14.784461 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.2: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.24.2': No such file or directory
I0701 22:52:14.784490 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 --> /var/lib/minikube/images/kube-controller-manager_v1.24.2 (31037952 bytes)
I0701 22:52:14.784594 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7
I0701 22:52:14.784661 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I0701 22:52:14.784741 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2
I0701 22:52:14.784803 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2
I0701 22:52:14.784894 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.2: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.24.2': No such file or directory
I0701 22:52:14.784912 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 --> /var/lib/minikube/images/kube-proxy_v1.24.2 (39518208 bytes)
I0701 22:52:14.784991 160696 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0701 22:52:14.785057 160696 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0701 22:52:14.785119 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
I0701 22:52:14.785140 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
I0701 22:52:14.791533 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
I0701 22:52:14.791534 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.2: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.24.2': No such file or directory
I0701 22:52:14.791564 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
I0701 22:52:14.791583 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 --> /var/lib/minikube/images/kube-scheduler_v1.24.2 (15491584 bytes)
I0701 22:52:14.792513 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.7': No such file or directory
I0701 22:52:14.792539 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
I0701 22:52:14.792550 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.2: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.24.2': No such file or directory
I0701 22:52:14.792586 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 --> /var/lib/minikube/images/kube-apiserver_v1.24.2 (33798144 bytes)
I0701 22:52:14.792601 160696 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0701 22:52:14.792631 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0701 22:52:14.917586 160696 containerd.go:227] Loading image: /var/lib/minikube/images/pause_3.7
I0701 22:52:14.917660 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
I0701 22:52:16.101432 160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7: (1.183745805s)
I0701 22:52:16.101474 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.7 from cache
I0701 22:52:16.101506 160696 containerd.go:227] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0701 22:52:16.101561 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0701 22:52:16.602015 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0701 22:52:16.602056 160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.2
I0701 22:52:16.602104 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.2
I0701 22:52:17.570921 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.24.2 from cache
I0701 22:52:17.570969 160696 containerd.go:227] Loading image: /var/lib/minikube/images/coredns_v1.8.6
I0701 22:52:17.571024 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
I0701 22:52:18.259818 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 from cache
I0701 22:52:18.259867 160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.2
I0701 22:52:18.259912 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2
I0701 22:52:20.084695 160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.2: (1.824738355s)
I0701 22:52:20.084729 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.24.2 from cache
I0701 22:52:20.084761 160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.2
I0701 22:52:20.084795 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2
I0701 22:52:25.445330 160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.2: (5.360506175s)
I0701 22:52:25.445364 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.24.2 from cache
I0701 22:52:25.445390 160696 containerd.go:227] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.2
I0701 22:52:25.445425 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2
I0701 22:52:26.528902 160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.2: (1.083452525s)
I0701 22:52:26.528929 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.24.2 from cache
I0701 22:52:26.528962 160696 containerd.go:227] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
I0701 22:52:26.528998 160696 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
I0701 22:52:30.457001 160696 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (3.927974233s)
I0701 22:52:30.457030 160696 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.3-0 from cache
I0701 22:52:30.457060 160696 cache_images.go:123] Successfully loaded all cached images
I0701 22:52:30.457066 160696 cache_images.go:92] LoadImages completed in 18.080611233s
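Each cached tarball is then imported serially into containerd's k8s.io namespace with the same ctr invocation, which is why LoadImages takes roughly 18s here. A minimal sketch of that loop, assuming the tarball paths shown in the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	images := []string{ // illustrative subset of the tarballs in the log
		"/var/lib/minikube/images/pause_3.7",
		"/var/lib/minikube/images/etcd_3.5.3-0",
	}
	for _, tar := range images {
		// same command the log runs for every image, one at a time
		cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("import %s failed: %v\n%s", tar, err, out)
			return
		}
		fmt.Println("loaded", tar)
	}
}
```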
I0701 22:52:30.457117 160696 ssh_runner.go:195] Run: sudo crictl info
I0701 22:52:30.488778 160696 cni.go:95] Creating CNI manager for ""
I0701 22:52:30.488811 160696 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0701 22:52:30.488829 160696 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0701 22:52:30.488848 160696 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220701225105-10066 NodeName:kubernetes-upgrade-20220701225105-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0701 22:52:30.489024 160696 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "kubernetes-upgrade-20220701225105-10066"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0701 22:52:30.489135 160696 kubeadm.go:961] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220701225105-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
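Note that this profile config is where the later failure originates: ExtraOptions still carries {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk} from the original v1.16.0 profile, and it is rendered into the kubelet ExecStart above as --cni-conf-dir=/etc/cni/net.mk. That flag belonged to the dockershim networking code removed in Kubernetes v1.24, so the v1.24.2 kubelet exits with "unknown flag: --cni-conf-dir". A hypothetical version gate that would avoid carrying the stale flag forward (filterKubeletFlags and the flag list are illustrative, not minikube's actual fix):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// dockershim-era kubelet flags removed in Kubernetes v1.24
var removedInV124 = map[string]bool{
	"--cni-conf-dir":   true,
	"--cni-bin-dir":    true,
	"--network-plugin": true,
}

// minorVersion extracts the minor number from a "vMAJOR.MINOR.PATCH" string
// (sketch: assumes well-formed v1.x.y versions).
func minorVersion(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	minor, _ := strconv.Atoi(parts[1])
	return minor
}

// filterKubeletFlags drops flags the target kubelet no longer understands.
func filterKubeletFlags(flags []string, k8sVersion string) []string {
	var keep []string
	for _, f := range flags {
		name := strings.SplitN(f, "=", 2)[0]
		if minorVersion(k8sVersion) >= 24 && removedInV124[name] {
			continue // stale dockershim flag, omit for v1.24+
		}
		keep = append(keep, f)
	}
	return keep
}

func main() {
	flags := []string{"--cni-conf-dir=/etc/cni/net.mk", "--node-ip=192.168.76.2"}
	fmt.Println(filterKubeletFlags(flags, "v1.24.2")) // [--node-ip=192.168.76.2]
}
```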
I0701 22:52:30.489211 160696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0701 22:52:30.497442 160696 binaries.go:44] Found k8s binaries, skipping transfer
I0701 22:52:30.497508 160696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0701 22:52:30.505952 160696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (563 bytes)
I0701 22:52:30.520033 160696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 22:52:30.533969 160696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
I0701 22:52:30.547983 160696 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0701 22:52:30.551271 160696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
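The bash one-liner above makes the /etc/hosts update idempotent: strip any existing control-plane.minikube.internal entry, append the current IP, and copy the staged file into place. A rough Go equivalent, assuming the same alias and IP:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any stale entry for the control-plane alias
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.76.2\tcontrol-plane.minikube.internal")
	// stage the result; the log then copies it into place with sudo cp
	if err := os.WriteFile("/tmp/hosts.new",
		[]byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```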
I0701 22:52:30.562236 160696 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066 for IP: 192.168.76.2
I0701 22:52:30.562332 160696 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
I0701 22:52:30.562366 160696 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
I0701 22:52:30.562455 160696 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/client.key
I0701 22:52:30.562565 160696 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/apiserver.key.31bdca25
I0701 22:52:30.562627 160696 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/proxy-client.key
I0701 22:52:30.562773 160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
W0701 22:52:30.562811 160696 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
I0701 22:52:30.562829 160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
I0701 22:52:30.562864 160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
I0701 22:52:30.562897 160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
I0701 22:52:30.562930 160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
I0701 22:52:30.562977 160696 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
I0701 22:52:30.563731 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0701 22:52:30.581831 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0701 22:52:30.600082 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 22:52:30.617194 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0701 22:52:30.634897 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 22:52:30.656519 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0701 22:52:30.675784 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 22:52:30.694370 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 22:52:30.713110 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
I0701 22:52:30.730872 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
I0701 22:52:30.749516 160696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0701 22:52:30.768728 160696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 22:52:30.782590 160696 ssh_runner.go:195] Run: openssl version
I0701 22:52:30.788044 160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 22:52:30.795818 160696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:30.798782 160696 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 1 22:24 /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:30.798829 160696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 22:52:30.803528 160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 22:52:30.810082 160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
I0701 22:52:30.817262 160696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
I0701 22:52:30.820055 160696 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 1 22:28 /usr/share/ca-certificates/10066.pem
I0701 22:52:30.820096 160696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
I0701 22:52:30.825093 160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
I0701 22:52:30.833317 160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
I0701 22:52:30.840631 160696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
I0701 22:52:30.843783 160696 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 1 22:28 /usr/share/ca-certificates/100662.pem
I0701 22:52:30.843828 160696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
I0701 22:52:30.848889 160696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
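These openssl/ln pairs implement the c_rehash convention: OpenSSL resolves CAs by subject-hash filenames, so each PEM gets a <hash>.0 symlink under /etc/ssl/certs. A sketch of that linking step (linkBySubjectHash is illustrative; the hash comes from the same openssl x509 -hash invocation as in the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink OpenSSL
// uses to look up a CA certificate by its subject hash.
func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem",
		"/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```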
I0701 22:52:30.855610 160696 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220701225105-10066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:kubernetes-upgrade-20220701225105-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:52:30.855704 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0701 22:52:30.855736 160696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0701 22:52:30.882763 160696 cri.go:87] found id: ""
I0701 22:52:30.882831 160696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0701 22:52:30.891094 160696 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0701 22:52:30.891122 160696 kubeadm.go:626] restartCluster start
I0701 22:52:30.891170 160696 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0701 22:52:30.897805 160696 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0701 22:52:30.898636 160696 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220701225105-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:52:30.899075 160696 kubeconfig.go:127] "kubernetes-upgrade-20220701225105-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
I0701 22:52:30.899750 160696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 22:52:30.900536 160696 kapi.go:59] client config for kubernetes-upgrade-20220701225105-10066: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/kubernetes-upgrade-20220701225105-10066/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x173d480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 22:52:30.900979 160696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0701 22:52:30.908582 160696 kubeadm.go:593] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2022-07-01 22:51:22.101568183 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2022-07-01 22:52:30.544362035 +0000
@@ -1,4 +1,4 @@
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
@@ -17,7 +17,7 @@
node-ip: 192.168.76.2
taints: []
---
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
@@ -31,16 +31,14 @@
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
-clusterName: kubernetes-upgrade-20220701225105-10066
+clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
-dns:
- type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
-kubernetesVersion: v1.16.0
+ proxy-refresh-interval: "70000"
+kubernetesVersion: v1.24.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
-- /stdout --
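The needs-reconfigure decision rests on diff's exit status: 0 means the cached kubeadm.yaml matches the freshly generated one, 1 means they differ (as here), and anything higher is an error. A minimal sketch of that check, using the same paths:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	// diff exits 1 when the files differ; that is the signal, not a failure
	if exit, ok := err.(*exec.ExitError); ok && exit.ExitCode() == 1 {
		fmt.Printf("needs reconfigure: configs differ:\n%s", out)
		return
	}
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	fmt.Println("configs identical; no reconfigure needed")
}
```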
I0701 22:52:30.908601 160696 kubeadm.go:1092] stopping kube-system containers ...
I0701 22:52:30.908613 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0701 22:52:30.908647 160696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0701 22:52:30.933111 160696 cri.go:87] found id: ""
I0701 22:52:30.933169 160696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0701 22:52:30.943408 160696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 22:52:30.950467 160696 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5759 Jul 1 22:51 /etc/kubernetes/admin.conf
-rw------- 1 root root 5799 Jul 1 22:51 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 5959 Jul 1 22:51 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5747 Jul 1 22:51 /etc/kubernetes/scheduler.conf
I0701 22:52:30.952037 160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0701 22:52:30.959157 160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0701 22:52:30.966197 160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0701 22:52:30.972868 160696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0701 22:52:30.979275 160696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 22:52:30.986195 160696 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0701 22:52:30.986214 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:31.037016 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:32.267951 160696 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.230899167s)
I0701 22:52:32.267987 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:32.458261 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0701 22:52:32.513001 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
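Rather than a full kubeadm init, the restart replays individual init phases against the updated config, in the order shown above. A sketch of that sequence (phase names and paths as in the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{ // same order as the log: certs through etcd
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all init phases replayed")
}
```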
I0701 22:52:32.554058 160696 api_server.go:51] waiting for apiserver process to appear ...
I0701 22:52:32.554121 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
[... the same "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" check repeated roughly every 500ms with no match, from 22:52:33.063016 through 22:53:31.563278 ...]
I0701 22:53:32.062432 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
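The span condensed above is minikube's wait loop: poll for the apiserver process about twice a second until it appears or the wait gives up, which never succeeds here because the kubelet cannot start. A minimal sketch of such a poll-until-deadline loop (the 60s deadline is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists
		if exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
```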
I0701 22:53:32.562471 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:53:32.562572 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:53:32.585597 160696 cri.go:87] found id: ""
I0701 22:53:32.585622 160696 logs.go:274] 0 containers: []
W0701 22:53:32.585628 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:53:32.585634 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:53:32.585683 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:53:32.607548 160696 cri.go:87] found id: ""
I0701 22:53:32.607575 160696 logs.go:274] 0 containers: []
W0701 22:53:32.607582 160696 logs.go:276] No container was found matching "etcd"
I0701 22:53:32.607588 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:53:32.607640 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:53:32.629319 160696 cri.go:87] found id: ""
I0701 22:53:32.629346 160696 logs.go:274] 0 containers: []
W0701 22:53:32.629354 160696 logs.go:276] No container was found matching "coredns"
I0701 22:53:32.629361 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:53:32.629413 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:53:32.650769 160696 cri.go:87] found id: ""
I0701 22:53:32.650794 160696 logs.go:274] 0 containers: []
W0701 22:53:32.650801 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:53:32.650810 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:53:32.650866 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:53:32.672723 160696 cri.go:87] found id: ""
I0701 22:53:32.672748 160696 logs.go:274] 0 containers: []
W0701 22:53:32.672758 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:53:32.672766 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:53:32.672817 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:53:32.695551 160696 cri.go:87] found id: ""
I0701 22:53:32.695571 160696 logs.go:274] 0 containers: []
W0701 22:53:32.695580 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:53:32.695590 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:53:32.695639 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:53:32.718224 160696 cri.go:87] found id: ""
I0701 22:53:32.718249 160696 logs.go:274] 0 containers: []
W0701 22:53:32.718257 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:53:32.718264 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:53:32.718316 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:53:32.740861 160696 cri.go:87] found id: ""
I0701 22:53:32.740887 160696 logs.go:274] 0 containers: []
W0701 22:53:32.740895 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:53:32.740904 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:53:32.740916 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:53:32.786433 160696 logs.go:138] Found kubelet problem: Jul 01 22:53:32 kubernetes-upgrade-20220701225105-10066 kubelet[2334]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:53:32.834141 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:53:32.834180 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:53:32.848164 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:53:32.848190 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:53:32.898660 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 22:53:32.898682 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:53:32.898694 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:53:32.935746 160696 logs.go:123] Gathering logs for container status ...
I0701 22:53:32.935776 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:53:32.960887 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:53:32.960912 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:53:32.961021 160696 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0701 22:53:32.961035 160696 out.go:239] Jul 01 22:53:32 kubernetes-upgrade-20220701225105-10066 kubelet[2334]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
Jul 01 22:53:32 kubernetes-upgrade-20220701225105-10066 kubelet[2334]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:53:32.961039 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:53:32.961044 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
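Each diagnostic cycle scans the kubelet journal for known-bad patterns, which is how the --cni-conf-dir line keeps being surfaced as a "Found kubelet problem". A sketch of that scan (the pattern match is illustrative, not minikube's actual matcher):

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// same journal read as the log's "Gathering logs for kubelet" step
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := sc.Text()
		// surface hard kubelet errors such as the unknown-flag failure
		if strings.Contains(line, "Error: failed to parse kubelet flag") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
```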
I0701 22:53:42.961493 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:53:43.063404 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:53:43.063480 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:53:43.090753 160696 cri.go:87] found id: ""
I0701 22:53:43.090778 160696 logs.go:274] 0 containers: []
W0701 22:53:43.090788 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:53:43.090796 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:53:43.090848 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:53:43.113480 160696 cri.go:87] found id: ""
I0701 22:53:43.113508 160696 logs.go:274] 0 containers: []
W0701 22:53:43.113516 160696 logs.go:276] No container was found matching "etcd"
I0701 22:53:43.113523 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:53:43.113563 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:53:43.140180 160696 cri.go:87] found id: ""
I0701 22:53:43.140219 160696 logs.go:274] 0 containers: []
W0701 22:53:43.140227 160696 logs.go:276] No container was found matching "coredns"
I0701 22:53:43.140236 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:53:43.140286 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:53:43.168190 160696 cri.go:87] found id: ""
I0701 22:53:43.168217 160696 logs.go:274] 0 containers: []
W0701 22:53:43.168226 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:53:43.168235 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:53:43.168283 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:53:43.194136 160696 cri.go:87] found id: ""
I0701 22:53:43.194160 160696 logs.go:274] 0 containers: []
W0701 22:53:43.194169 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:53:43.194176 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:53:43.194226 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:53:43.215600 160696 cri.go:87] found id: ""
I0701 22:53:43.215625 160696 logs.go:274] 0 containers: []
W0701 22:53:43.215634 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:53:43.215642 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:53:43.215715 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:53:43.242014 160696 cri.go:87] found id: ""
I0701 22:53:43.242042 160696 logs.go:274] 0 containers: []
W0701 22:53:43.242051 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:53:43.242072 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:53:43.242127 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:53:43.268163 160696 cri.go:87] found id: ""
I0701 22:53:43.268188 160696 logs.go:274] 0 containers: []
W0701 22:53:43.268196 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:53:43.268207 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:53:43.268220 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:53:43.318032 160696 logs.go:138] Found kubelet problem: Jul 01 22:53:42 kubernetes-upgrade-20220701225105-10066 kubelet[2626]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:53:43.382286 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:53:43.382321 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:53:43.397700 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:53:43.397730 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:53:43.453125 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 22:53:43.453152 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:53:43.453165 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:53:43.499946 160696 logs.go:123] Gathering logs for container status ...
I0701 22:53:43.499979 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:53:43.525892 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:53:43.525921 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:53:43.526035 160696 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0701 22:53:43.526057 160696 out.go:239] Jul 01 22:53:42 kubernetes-upgrade-20220701225105-10066 kubelet[2626]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
Jul 01 22:53:42 kubernetes-upgrade-20220701225105-10066 kubelet[2626]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:53:43.526065 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:53:43.526073 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:53:53.527728 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:53:53.563164 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:53:53.563241 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:53:53.585931 160696 cri.go:87] found id: ""
I0701 22:53:53.585964 160696 logs.go:274] 0 containers: []
W0701 22:53:53.585972 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:53:53.585981 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:53:53.586045 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:53:53.611388 160696 cri.go:87] found id: ""
I0701 22:53:53.611414 160696 logs.go:274] 0 containers: []
W0701 22:53:53.611420 160696 logs.go:276] No container was found matching "etcd"
I0701 22:53:53.611425 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:53:53.611481 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:53:53.635094 160696 cri.go:87] found id: ""
I0701 22:53:53.635117 160696 logs.go:274] 0 containers: []
W0701 22:53:53.635126 160696 logs.go:276] No container was found matching "coredns"
I0701 22:53:53.635133 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:53:53.635187 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:53:53.656953 160696 cri.go:87] found id: ""
I0701 22:53:53.656978 160696 logs.go:274] 0 containers: []
W0701 22:53:53.656987 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:53:53.656994 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:53:53.657041 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:53:53.678488 160696 cri.go:87] found id: ""
I0701 22:53:53.678510 160696 logs.go:274] 0 containers: []
W0701 22:53:53.678518 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:53:53.678526 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:53:53.678601 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:53:53.699827 160696 cri.go:87] found id: ""
I0701 22:53:53.699852 160696 logs.go:274] 0 containers: []
W0701 22:53:53.699861 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:53:53.699869 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:53:53.699911 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:53:53.721604 160696 cri.go:87] found id: ""
I0701 22:53:53.721644 160696 logs.go:274] 0 containers: []
W0701 22:53:53.721654 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:53:53.721664 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:53:53.721716 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:53:53.743391 160696 cri.go:87] found id: ""
I0701 22:53:53.743409 160696 logs.go:274] 0 containers: []
W0701 22:53:53.743416 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:53:53.743423 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:53:53.743432 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:53:53.777151 160696 logs.go:123] Gathering logs for container status ...
I0701 22:53:53.777179 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:53:53.801530 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:53:53.801556 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:53:53.846297 160696 logs.go:138] Found kubelet problem: Jul 01 22:53:53 kubernetes-upgrade-20220701225105-10066 kubelet[2915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:53:53.896001 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:53:53.896031 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:53:53.909709 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:53:53.909732 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:53:53.958673 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:53:53.958699 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:53:53.958711 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:53:53.958839 160696 out.go:239] X Problems detected in kubelet:
W0701 22:53:53.958853 160696 out.go:239] Jul 01 22:53:53 kubernetes-upgrade-20220701225105-10066 kubelet[2915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:53:53.958860 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:53:53.958871 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
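The flag-parse failure above is the same one every subsequent retry reports, so it can be verified directly instead of waiting out the loop. A minimal sketch, assuming the profile from this run is still reachable over minikube ssh:

    # Grep the kubelet journal for the flag-parse error quoted above
    minikube ssh -p kubernetes-upgrade-20220701225105-10066 -- \
      sudo journalctl -u kubelet -n 50 --no-pager | grep -F -- '--cni-conf-dir'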
I0701 22:54:03.960392 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:54:04.062620 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:54:04.062708 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:54:04.095188 160696 cri.go:87] found id: ""
I0701 22:54:04.095218 160696 logs.go:274] 0 containers: []
W0701 22:54:04.095228 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:54:04.095236 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:54:04.095289 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:54:04.123432 160696 cri.go:87] found id: ""
I0701 22:54:04.123460 160696 logs.go:274] 0 containers: []
W0701 22:54:04.123468 160696 logs.go:276] No container was found matching "etcd"
I0701 22:54:04.123476 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:54:04.123530 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:54:04.154843 160696 cri.go:87] found id: ""
I0701 22:54:04.154887 160696 logs.go:274] 0 containers: []
W0701 22:54:04.154897 160696 logs.go:276] No container was found matching "coredns"
I0701 22:54:04.154906 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:54:04.154960 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:54:04.181719 160696 cri.go:87] found id: ""
I0701 22:54:04.181740 160696 logs.go:274] 0 containers: []
W0701 22:54:04.181745 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:54:04.181751 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:54:04.181793 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:54:04.208646 160696 cri.go:87] found id: ""
I0701 22:54:04.208671 160696 logs.go:274] 0 containers: []
W0701 22:54:04.208683 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:54:04.208692 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:54:04.208746 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:54:04.241818 160696 cri.go:87] found id: ""
I0701 22:54:04.241877 160696 logs.go:274] 0 containers: []
W0701 22:54:04.241898 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:54:04.241912 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:54:04.241971 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:54:04.270953 160696 cri.go:87] found id: ""
I0701 22:54:04.270981 160696 logs.go:274] 0 containers: []
W0701 22:54:04.270989 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:54:04.270996 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:54:04.271054 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:54:04.296294 160696 cri.go:87] found id: ""
I0701 22:54:04.296319 160696 logs.go:274] 0 containers: []
W0701 22:54:04.296329 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:54:04.296341 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:54:04.296366 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:54:04.352321 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:54:04.352346 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:54:04.352362 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:54:04.396791 160696 logs.go:123] Gathering logs for container status ...
I0701 22:54:04.396831 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:54:04.424182 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:54:04.424213 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:54:04.472377 160696 logs.go:138] Found kubelet problem: Jul 01 22:54:03 kubernetes-upgrade-20220701225105-10066 kubelet[3201]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:04.517232 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:54:04.517269 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:54:04.532247 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:04.532278 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:54:04.532401 160696 out.go:239] X Problems detected in kubelet:
W0701 22:54:04.532418 160696 out.go:239] Jul 01 22:54:03 kubernetes-upgrade-20220701225105-10066 kubelet[3201]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:04.532424 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:04.532433 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:54:14.533501 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:54:14.563105 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:54:14.563185 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:54:14.590523 160696 cri.go:87] found id: ""
I0701 22:54:14.590588 160696 logs.go:274] 0 containers: []
W0701 22:54:14.590596 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:54:14.590601 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:54:14.590646 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:54:14.613185 160696 cri.go:87] found id: ""
I0701 22:54:14.613205 160696 logs.go:274] 0 containers: []
W0701 22:54:14.613213 160696 logs.go:276] No container was found matching "etcd"
I0701 22:54:14.613218 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:54:14.613256 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:54:14.640142 160696 cri.go:87] found id: ""
I0701 22:54:14.640168 160696 logs.go:274] 0 containers: []
W0701 22:54:14.640182 160696 logs.go:276] No container was found matching "coredns"
I0701 22:54:14.640190 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:54:14.640240 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:54:14.668385 160696 cri.go:87] found id: ""
I0701 22:54:14.668415 160696 logs.go:274] 0 containers: []
W0701 22:54:14.668426 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:54:14.668436 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:54:14.668501 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:54:14.692667 160696 cri.go:87] found id: ""
I0701 22:54:14.692690 160696 logs.go:274] 0 containers: []
W0701 22:54:14.692699 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:54:14.692708 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:54:14.692764 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:54:14.715534 160696 cri.go:87] found id: ""
I0701 22:54:14.715566 160696 logs.go:274] 0 containers: []
W0701 22:54:14.715574 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:54:14.715582 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:54:14.715632 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:54:14.745303 160696 cri.go:87] found id: ""
I0701 22:54:14.745329 160696 logs.go:274] 0 containers: []
W0701 22:54:14.745338 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:54:14.745346 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:54:14.745413 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:54:14.775750 160696 cri.go:87] found id: ""
I0701 22:54:14.775777 160696 logs.go:274] 0 containers: []
W0701 22:54:14.775785 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:54:14.775797 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:54:14.775811 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:54:14.839644 160696 logs.go:138] Found kubelet problem: Jul 01 22:54:14 kubernetes-upgrade-20220701225105-10066 kubelet[3487]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:14.901382 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:54:14.901413 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:54:14.916871 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:54:14.916912 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:54:14.984706 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:54:14.984736 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:54:14.984747 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:54:15.023712 160696 logs.go:123] Gathering logs for container status ...
I0701 22:54:15.023772 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:54:15.057095 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:15.057126 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:54:15.057274 160696 out.go:239] X Problems detected in kubelet:
W0701 22:54:15.057294 160696 out.go:239] Jul 01 22:54:14 kubernetes-upgrade-20220701225105-10066 kubelet[3487]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:15.057301 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:15.057313 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:54:25.058476 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:54:25.562417 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:54:25.562478 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:54:25.584954 160696 cri.go:87] found id: ""
I0701 22:54:25.584980 160696 logs.go:274] 0 containers: []
W0701 22:54:25.584990 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:54:25.584998 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:54:25.585056 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:54:25.607425 160696 cri.go:87] found id: ""
I0701 22:54:25.607454 160696 logs.go:274] 0 containers: []
W0701 22:54:25.607463 160696 logs.go:276] No container was found matching "etcd"
I0701 22:54:25.607469 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:54:25.607512 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:54:25.641054 160696 cri.go:87] found id: ""
I0701 22:54:25.641090 160696 logs.go:274] 0 containers: []
W0701 22:54:25.641115 160696 logs.go:276] No container was found matching "coredns"
I0701 22:54:25.641126 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:54:25.641188 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:54:25.677105 160696 cri.go:87] found id: ""
I0701 22:54:25.677134 160696 logs.go:274] 0 containers: []
W0701 22:54:25.677143 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:54:25.677151 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:54:25.677211 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:54:25.703887 160696 cri.go:87] found id: ""
I0701 22:54:25.703913 160696 logs.go:274] 0 containers: []
W0701 22:54:25.703922 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:54:25.703929 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:54:25.703972 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:54:25.733969 160696 cri.go:87] found id: ""
I0701 22:54:25.733999 160696 logs.go:274] 0 containers: []
W0701 22:54:25.734010 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:54:25.734019 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:54:25.734079 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:54:25.776644 160696 cri.go:87] found id: ""
I0701 22:54:25.776668 160696 logs.go:274] 0 containers: []
W0701 22:54:25.776675 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:54:25.776681 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:54:25.776732 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:54:25.800401 160696 cri.go:87] found id: ""
I0701 22:54:25.800432 160696 logs.go:274] 0 containers: []
W0701 22:54:25.800441 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:54:25.800452 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:54:25.800464 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:54:25.867632 160696 logs.go:138] Found kubelet problem: Jul 01 22:54:25 kubernetes-upgrade-20220701225105-10066 kubelet[3842]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:25.914018 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:54:25.914046 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:54:25.934795 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:54:25.934832 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:54:25.993364 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:54:25.993387 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:54:25.993398 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:54:26.036597 160696 logs.go:123] Gathering logs for container status ...
I0701 22:54:26.036638 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:54:26.071276 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:26.071302 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:54:26.071401 160696 out.go:239] X Problems detected in kubelet:
W0701 22:54:26.071414 160696 out.go:239] Jul 01 22:54:25 kubernetes-upgrade-20220701225105-10066 kubelet[3842]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:26.071423 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:26.071428 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
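The root cause of that recurring error: kubelet.cni-conf-dir=/etc/cni/net.mk was stored in this profile when it was created on v1.16.0, and --cni-conf-dir was one of the dockershim-era kubelet flags removed in Kubernetes v1.24, so the replayed flag no longer parses. A hedged way to confirm the difference on the node follows; the v1.24.2 binary path is taken from this log, while the v1.16.0 path is the analogous assumption:

    # Count how often each cached kubelet's help text mentions the flag
    /var/lib/minikube/binaries/v1.16.0/kubelet --help 2>&1 | grep -c -- --cni-conf-dir   # expected: >= 1
    /var/lib/minikube/binaries/v1.24.2/kubelet --help 2>&1 | grep -c -- --cni-conf-dir   # expected: 0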
I0701 22:54:36.072175 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:54:36.563262 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:54:36.563462 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:54:36.594253 160696 cri.go:87] found id: ""
I0701 22:54:36.594277 160696 logs.go:274] 0 containers: []
W0701 22:54:36.594283 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:54:36.594289 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:54:36.594329 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:54:36.616366 160696 cri.go:87] found id: ""
I0701 22:54:36.616388 160696 logs.go:274] 0 containers: []
W0701 22:54:36.616394 160696 logs.go:276] No container was found matching "etcd"
I0701 22:54:36.616401 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:54:36.616445 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:54:36.649662 160696 cri.go:87] found id: ""
I0701 22:54:36.649688 160696 logs.go:274] 0 containers: []
W0701 22:54:36.649702 160696 logs.go:276] No container was found matching "coredns"
I0701 22:54:36.649711 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:54:36.649761 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:54:36.679021 160696 cri.go:87] found id: ""
I0701 22:54:36.679049 160696 logs.go:274] 0 containers: []
W0701 22:54:36.679058 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:54:36.679066 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:54:36.679120 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:54:36.705720 160696 cri.go:87] found id: ""
I0701 22:54:36.705750 160696 logs.go:274] 0 containers: []
W0701 22:54:36.705758 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:54:36.705770 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:54:36.705811 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:54:36.732054 160696 cri.go:87] found id: ""
I0701 22:54:36.732083 160696 logs.go:274] 0 containers: []
W0701 22:54:36.732093 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:54:36.732103 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:54:36.732165 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:54:36.761779 160696 cri.go:87] found id: ""
I0701 22:54:36.761806 160696 logs.go:274] 0 containers: []
W0701 22:54:36.761815 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:54:36.761825 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:54:36.761876 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:54:36.786589 160696 cri.go:87] found id: ""
I0701 22:54:36.786611 160696 logs.go:274] 0 containers: []
W0701 22:54:36.786617 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:54:36.786626 160696 logs.go:123] Gathering logs for container status ...
I0701 22:54:36.786639 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:54:36.812309 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:54:36.812341 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:54:36.877299 160696 logs.go:138] Found kubelet problem: Jul 01 22:54:36 kubernetes-upgrade-20220701225105-10066 kubelet[4081]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:36.928347 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:54:36.928394 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:54:36.948193 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:54:36.948231 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:54:37.025127 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:54:37.025156 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:54:37.025172 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:54:37.075243 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:37.075275 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:54:37.075415 160696 out.go:239] X Problems detected in kubelet:
W0701 22:54:37.075433 160696 out.go:239] Jul 01 22:54:36 kubernetes-upgrade-20220701225105-10066 kubelet[4081]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:37.075449 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:37.075463 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:54:47.076890 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:54:47.562452 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:54:47.562512 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:54:47.595437 160696 cri.go:87] found id: ""
I0701 22:54:47.595465 160696 logs.go:274] 0 containers: []
W0701 22:54:47.595475 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:54:47.595482 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:54:47.595538 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:54:47.621071 160696 cri.go:87] found id: ""
I0701 22:54:47.621094 160696 logs.go:274] 0 containers: []
W0701 22:54:47.621102 160696 logs.go:276] No container was found matching "etcd"
I0701 22:54:47.621109 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:54:47.621152 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:54:47.648248 160696 cri.go:87] found id: ""
I0701 22:54:47.648269 160696 logs.go:274] 0 containers: []
W0701 22:54:47.648274 160696 logs.go:276] No container was found matching "coredns"
I0701 22:54:47.648280 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:54:47.648329 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:54:47.676799 160696 cri.go:87] found id: ""
I0701 22:54:47.676828 160696 logs.go:274] 0 containers: []
W0701 22:54:47.676836 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:54:47.676844 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:54:47.676896 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:54:47.700394 160696 cri.go:87] found id: ""
I0701 22:54:47.700418 160696 logs.go:274] 0 containers: []
W0701 22:54:47.700426 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:54:47.700434 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:54:47.700486 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:54:47.739622 160696 cri.go:87] found id: ""
I0701 22:54:47.739654 160696 logs.go:274] 0 containers: []
W0701 22:54:47.739662 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:54:47.739671 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:54:47.739724 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:54:47.763796 160696 cri.go:87] found id: ""
I0701 22:54:47.763820 160696 logs.go:274] 0 containers: []
W0701 22:54:47.763826 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:54:47.763833 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:54:47.763889 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:54:47.786674 160696 cri.go:87] found id: ""
I0701 22:54:47.786717 160696 logs.go:274] 0 containers: []
W0701 22:54:47.786726 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:54:47.786736 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:54:47.786746 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:54:47.838033 160696 logs.go:138] Found kubelet problem: Jul 01 22:54:47 kubernetes-upgrade-20220701225105-10066 kubelet[4428]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:47.887699 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:54:47.887726 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:54:47.902663 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:54:47.902698 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:54:47.958610 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:54:47.958638 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:54:47.958651 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:54:48.011007 160696 logs.go:123] Gathering logs for container status ...
I0701 22:54:48.011038 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:54:48.037572 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:48.037607 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:54:48.037738 160696 out.go:239] X Problems detected in kubelet:
W0701 22:54:48.037765 160696 out.go:239] Jul 01 22:54:47 kubernetes-upgrade-20220701225105-10066 kubelet[4428]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:48.037784 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:48.037793 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:54:58.038971 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:54:58.062879 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:54:58.062967 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:54:58.088606 160696 cri.go:87] found id: ""
I0701 22:54:58.088638 160696 logs.go:274] 0 containers: []
W0701 22:54:58.088647 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:54:58.088654 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:54:58.088709 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:54:58.111118 160696 cri.go:87] found id: ""
I0701 22:54:58.111148 160696 logs.go:274] 0 containers: []
W0701 22:54:58.111158 160696 logs.go:276] No container was found matching "etcd"
I0701 22:54:58.111167 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:54:58.111221 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:54:58.133450 160696 cri.go:87] found id: ""
I0701 22:54:58.133472 160696 logs.go:274] 0 containers: []
W0701 22:54:58.133478 160696 logs.go:276] No container was found matching "coredns"
I0701 22:54:58.133491 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:54:58.133545 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:54:58.155591 160696 cri.go:87] found id: ""
I0701 22:54:58.155612 160696 logs.go:274] 0 containers: []
W0701 22:54:58.155618 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:54:58.155625 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:54:58.155669 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:54:58.178502 160696 cri.go:87] found id: ""
I0701 22:54:58.178531 160696 logs.go:274] 0 containers: []
W0701 22:54:58.178559 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:54:58.178568 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:54:58.178617 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:54:58.202856 160696 cri.go:87] found id: ""
I0701 22:54:58.202886 160696 logs.go:274] 0 containers: []
W0701 22:54:58.202894 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:54:58.202902 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:54:58.202956 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:54:58.232058 160696 cri.go:87] found id: ""
I0701 22:54:58.232084 160696 logs.go:274] 0 containers: []
W0701 22:54:58.232091 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:54:58.232097 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:54:58.232145 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:54:58.254778 160696 cri.go:87] found id: ""
I0701 22:54:58.254810 160696 logs.go:274] 0 containers: []
W0701 22:54:58.254819 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:54:58.254830 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:54:58.254843 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:54:58.289770 160696 logs.go:123] Gathering logs for container status ...
I0701 22:54:58.289798 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:54:58.315832 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:54:58.315859 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:54:58.366123 160696 logs.go:138] Found kubelet problem: Jul 01 22:54:58 kubernetes-upgrade-20220701225105-10066 kubelet[4741]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:58.410935 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:54:58.410962 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:54:58.424934 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:54:58.424960 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:54:58.473003 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:54:58.473031 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:58.473041 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:54:58.473141 160696 out.go:239] X Problems detected in kubelet:
W0701 22:54:58.473156 160696 out.go:239] Jul 01 22:54:58 kubernetes-upgrade-20220701225105-10066 kubelet[4741]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:54:58.473162 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:54:58.473173 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
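Each block above and below is one pass of the same ten-second probe: check for a running apiserver process, then enumerate each expected control-plane container. As a rough shell sketch built only from commands already shown in this log:

    # Approximation of the wait loop, run inside the node
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
               kubernetes-dashboard storage-provisioner kube-controller-manager; do
        sudo crictl ps -a --quiet --name="$c"   # empty output means the container is absent
      done
      sleep 10
    done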
I0701 22:55:08.474941 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:55:08.562590 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:55:08.562672 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:55:08.586407 160696 cri.go:87] found id: ""
I0701 22:55:08.586436 160696 logs.go:274] 0 containers: []
W0701 22:55:08.586444 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:55:08.586452 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:55:08.586505 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:55:08.608738 160696 cri.go:87] found id: ""
I0701 22:55:08.608766 160696 logs.go:274] 0 containers: []
W0701 22:55:08.608774 160696 logs.go:276] No container was found matching "etcd"
I0701 22:55:08.608782 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:55:08.608821 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:55:08.633416 160696 cri.go:87] found id: ""
I0701 22:55:08.633436 160696 logs.go:274] 0 containers: []
W0701 22:55:08.633442 160696 logs.go:276] No container was found matching "coredns"
I0701 22:55:08.633448 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:55:08.633489 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:55:08.655497 160696 cri.go:87] found id: ""
I0701 22:55:08.655516 160696 logs.go:274] 0 containers: []
W0701 22:55:08.655522 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:55:08.655527 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:55:08.655568 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:55:08.679218 160696 cri.go:87] found id: ""
I0701 22:55:08.679240 160696 logs.go:274] 0 containers: []
W0701 22:55:08.679249 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:55:08.679256 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:55:08.679304 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:55:08.704513 160696 cri.go:87] found id: ""
I0701 22:55:08.704535 160696 logs.go:274] 0 containers: []
W0701 22:55:08.704543 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:55:08.704551 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:55:08.704598 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:55:08.729573 160696 cri.go:87] found id: ""
I0701 22:55:08.729604 160696 logs.go:274] 0 containers: []
W0701 22:55:08.729612 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:55:08.729619 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:55:08.729723 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:55:08.753043 160696 cri.go:87] found id: ""
I0701 22:55:08.753068 160696 logs.go:274] 0 containers: []
W0701 22:55:08.753074 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:55:08.753082 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:55:08.753091 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:55:08.796915 160696 logs.go:138] Found kubelet problem: Jul 01 22:55:08 kubernetes-upgrade-20220701225105-10066 kubelet[5036]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:08.842381 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:55:08.842408 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:55:08.857534 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:55:08.857567 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:55:08.906371 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:55:08.906394 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:55:08.906406 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:55:08.942796 160696 logs.go:123] Gathering logs for container status ...
I0701 22:55:08.942824 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:55:08.968098 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:08.968125 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:55:08.968222 160696 out.go:239] X Problems detected in kubelet:
W0701 22:55:08.968235 160696 out.go:239] Jul 01 22:55:08 kubernetes-upgrade-20220701225105-10066 kubelet[5036]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:08.968239 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:08.968245 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:55:18.968798 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:55:19.063356 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:55:19.063428 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:55:19.093011 160696 cri.go:87] found id: ""
I0701 22:55:19.093035 160696 logs.go:274] 0 containers: []
W0701 22:55:19.093041 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:55:19.093047 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:55:19.093090 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:55:19.119221 160696 cri.go:87] found id: ""
I0701 22:55:19.119297 160696 logs.go:274] 0 containers: []
W0701 22:55:19.119318 160696 logs.go:276] No container was found matching "etcd"
I0701 22:55:19.119327 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:55:19.119383 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:55:19.143965 160696 cri.go:87] found id: ""
I0701 22:55:19.143987 160696 logs.go:274] 0 containers: []
W0701 22:55:19.143994 160696 logs.go:276] No container was found matching "coredns"
I0701 22:55:19.144001 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:55:19.144051 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:55:19.166639 160696 cri.go:87] found id: ""
I0701 22:55:19.166668 160696 logs.go:274] 0 containers: []
W0701 22:55:19.166688 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:55:19.166697 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:55:19.166754 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:55:19.192145 160696 cri.go:87] found id: ""
I0701 22:55:19.192171 160696 logs.go:274] 0 containers: []
W0701 22:55:19.192179 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:55:19.192192 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:55:19.192254 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:55:19.218856 160696 cri.go:87] found id: ""
I0701 22:55:19.218882 160696 logs.go:274] 0 containers: []
W0701 22:55:19.218891 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:55:19.218898 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:55:19.218948 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:55:19.243276 160696 cri.go:87] found id: ""
I0701 22:55:19.243296 160696 logs.go:274] 0 containers: []
W0701 22:55:19.243302 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:55:19.243308 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:55:19.243353 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:55:19.266676 160696 cri.go:87] found id: ""
I0701 22:55:19.266704 160696 logs.go:274] 0 containers: []
W0701 22:55:19.266713 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:55:19.266724 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:55:19.266737 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:55:19.318352 160696 logs.go:138] Found kubelet problem: Jul 01 22:55:19 kubernetes-upgrade-20220701225105-10066 kubelet[5330]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:19.363471 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:55:19.363497 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:55:19.377394 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:55:19.377418 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:55:19.428638 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:55:19.428661 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:55:19.428676 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:55:19.465329 160696 logs.go:123] Gathering logs for container status ...
I0701 22:55:19.465361 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:55:19.490929 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:19.490952 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:55:19.491049 160696 out.go:239] X Problems detected in kubelet:
W0701 22:55:19.491061 160696 out.go:239] Jul 01 22:55:19 kubernetes-upgrade-20220701225105-10066 kubelet[5330]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:19.491068 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:19.491073 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:55:29.491548 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:55:29.562433 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:55:29.562499 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:55:29.586547 160696 cri.go:87] found id: ""
I0701 22:55:29.586571 160696 logs.go:274] 0 containers: []
W0701 22:55:29.586580 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:55:29.586587 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:55:29.586636 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:55:29.611066 160696 cri.go:87] found id: ""
I0701 22:55:29.611100 160696 logs.go:274] 0 containers: []
W0701 22:55:29.611108 160696 logs.go:276] No container was found matching "etcd"
I0701 22:55:29.611116 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:55:29.611169 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:55:29.636857 160696 cri.go:87] found id: ""
I0701 22:55:29.636885 160696 logs.go:274] 0 containers: []
W0701 22:55:29.636894 160696 logs.go:276] No container was found matching "coredns"
I0701 22:55:29.636902 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:55:29.636951 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:55:29.676798 160696 cri.go:87] found id: ""
I0701 22:55:29.676827 160696 logs.go:274] 0 containers: []
W0701 22:55:29.676835 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:55:29.676843 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:55:29.676895 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:55:29.708940 160696 cri.go:87] found id: ""
I0701 22:55:29.708971 160696 logs.go:274] 0 containers: []
W0701 22:55:29.708980 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:55:29.708986 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:55:29.709036 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:55:29.740654 160696 cri.go:87] found id: ""
I0701 22:55:29.740680 160696 logs.go:274] 0 containers: []
W0701 22:55:29.740689 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:55:29.740697 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:55:29.740747 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:55:29.769352 160696 cri.go:87] found id: ""
I0701 22:55:29.769380 160696 logs.go:274] 0 containers: []
W0701 22:55:29.769390 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:55:29.769397 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:55:29.769446 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:55:29.793398 160696 cri.go:87] found id: ""
I0701 22:55:29.793423 160696 logs.go:274] 0 containers: []
W0701 22:55:29.793432 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:55:29.793443 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:55:29.793462 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:55:29.843283 160696 logs.go:138] Found kubelet problem: Jul 01 22:55:29 kubernetes-upgrade-20220701225105-10066 kubelet[5615]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:29.891720 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:55:29.891748 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:55:29.906601 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:55:29.906635 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:55:29.961916 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0701 22:55:29.961945 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:55:29.961959 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:55:30.015559 160696 logs.go:123] Gathering logs for container status ...
I0701 22:55:30.015594 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:55:30.046203 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:30.046234 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:55:30.046364 160696 out.go:239] X Problems detected in kubelet:
W0701 22:55:30.046384 160696 out.go:239] Jul 01 22:55:29 kubernetes-upgrade-20220701225105-10066 kubelet[5615]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:30.046391 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:30.046401 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
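The ~10-second cycle above is minikube's apiserver wait loop: pgrep for a kube-apiserver process, a crictl listing per control-plane component (all empty), then a log sweep over kubelet, dmesg, describe nodes, containerd, and container status. The root cause is already visible in the kubelet journal: the kubelet is being started with --cni-conf-dir, a dockershim-era networking flag that Kubernetes v1.24 removed, so it exits during flag parsing before any static pod (including the apiserver) can start. A minimal sketch of reproducing the same checks by hand, assuming shell access to the node container (the test drives them through its own SSH runner; docker exec is an assumption here):
  # assumed entry point into the docker-driver node; profile name taken from this test
  docker exec -it kubernetes-upgrade-20220701225105-10066 bash
  # mirrors the pgrep poll above
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # mirrors the per-component crictl listings above
  sudo crictl ps -a --quiet --name=kube-apiserver
  # shows the fatal flag-parse error the loop keeps finding
  sudo journalctl -u kubelet -n 50 | grep -F 'unknown flag: --cni-conf-dir'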
I0701 22:55:40.046837 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:55:40.062927 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:55:40.062993 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:55:40.086083 160696 cri.go:87] found id: ""
I0701 22:55:40.086105 160696 logs.go:274] 0 containers: []
W0701 22:55:40.086112 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:55:40.086117 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:55:40.086164 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:55:40.111953 160696 cri.go:87] found id: ""
I0701 22:55:40.111976 160696 logs.go:274] 0 containers: []
W0701 22:55:40.111982 160696 logs.go:276] No container was found matching "etcd"
I0701 22:55:40.111988 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:55:40.112031 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:55:40.135730 160696 cri.go:87] found id: ""
I0701 22:55:40.135752 160696 logs.go:274] 0 containers: []
W0701 22:55:40.135758 160696 logs.go:276] No container was found matching "coredns"
I0701 22:55:40.135766 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:55:40.135818 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:55:40.159395 160696 cri.go:87] found id: ""
I0701 22:55:40.159420 160696 logs.go:274] 0 containers: []
W0701 22:55:40.159426 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:55:40.159432 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:55:40.159484 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:55:40.183672 160696 cri.go:87] found id: ""
I0701 22:55:40.183698 160696 logs.go:274] 0 containers: []
W0701 22:55:40.183707 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:55:40.183714 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:55:40.183763 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:55:40.212650 160696 cri.go:87] found id: ""
I0701 22:55:40.212677 160696 logs.go:274] 0 containers: []
W0701 22:55:40.212684 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:55:40.212691 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:55:40.212741 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:55:40.240724 160696 cri.go:87] found id: ""
I0701 22:55:40.240750 160696 logs.go:274] 0 containers: []
W0701 22:55:40.240757 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:55:40.240765 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:55:40.240817 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:55:40.263440 160696 cri.go:87] found id: ""
I0701 22:55:40.263465 160696 logs.go:274] 0 containers: []
W0701 22:55:40.263473 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:55:40.263483 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:55:40.263495 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:55:40.307528 160696 logs.go:138] Found kubelet problem: Jul 01 22:55:40 kubernetes-upgrade-20220701225105-10066 kubelet[5915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:40.352597 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:55:40.352628 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:55:40.366682 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:55:40.366708 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:55:40.415340 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
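The localhost:8443 refusal from describe nodes is a downstream symptom, not a separate fault: 8443 is the apiserver's secure port inside the node, and with no kube-apiserver container running, any kubectl call through this kubeconfig fails the same way, e.g. (same binary and kubeconfig paths as the command above; the subcommand is an illustrative assumption):
  sudo /var/lib/minikube/binaries/v1.24.2/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig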
I0701 22:55:40.415366 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:55:40.415379 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:55:40.453970 160696 logs.go:123] Gathering logs for container status ...
I0701 22:55:40.454007 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:55:40.482132 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:40.482214 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:55:40.482359 160696 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0701 22:55:40.482374 160696 out.go:239] Jul 01 22:55:40 kubernetes-upgrade-20220701225105-10066 kubelet[5915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
Jul 01 22:55:40 kubernetes-upgrade-20220701225105-10066 kubelet[5915]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:40.482379 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:40.482385 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:55:50.483245 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:55:50.562586 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:55:50.562662 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:55:50.592410 160696 cri.go:87] found id: ""
I0701 22:55:50.592432 160696 logs.go:274] 0 containers: []
W0701 22:55:50.592441 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:55:50.592448 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:55:50.592498 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:55:50.615057 160696 cri.go:87] found id: ""
I0701 22:55:50.615081 160696 logs.go:274] 0 containers: []
W0701 22:55:50.615090 160696 logs.go:276] No container was found matching "etcd"
I0701 22:55:50.615098 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:55:50.615146 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:55:50.648585 160696 cri.go:87] found id: ""
I0701 22:55:50.648613 160696 logs.go:274] 0 containers: []
W0701 22:55:50.648621 160696 logs.go:276] No container was found matching "coredns"
I0701 22:55:50.648630 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:55:50.648679 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:55:50.678338 160696 cri.go:87] found id: ""
I0701 22:55:50.678365 160696 logs.go:274] 0 containers: []
W0701 22:55:50.678374 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:55:50.678381 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:55:50.678456 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:55:50.704520 160696 cri.go:87] found id: ""
I0701 22:55:50.704546 160696 logs.go:274] 0 containers: []
W0701 22:55:50.704555 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:55:50.704562 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:55:50.704616 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:55:50.748816 160696 cri.go:87] found id: ""
I0701 22:55:50.748838 160696 logs.go:274] 0 containers: []
W0701 22:55:50.748846 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:55:50.748853 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:55:50.748902 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:55:50.778493 160696 cri.go:87] found id: ""
I0701 22:55:50.778522 160696 logs.go:274] 0 containers: []
W0701 22:55:50.778530 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:55:50.778570 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:55:50.778627 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:55:50.807443 160696 cri.go:87] found id: ""
I0701 22:55:50.807468 160696 logs.go:274] 0 containers: []
W0701 22:55:50.807474 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:55:50.807482 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:55:50.807495 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:55:50.878177 160696 logs.go:138] Found kubelet problem: Jul 01 22:55:50 kubernetes-upgrade-20220701225105-10066 kubelet[6197]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:50.941462 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:55:50.941496 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:55:50.960430 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:55:50.960484 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:55:51.028941 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 22:55:51.028968 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:55:51.028981 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:55:51.088918 160696 logs.go:123] Gathering logs for container status ...
I0701 22:55:51.088957 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:55:51.129064 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:51.129090 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:55:51.129192 160696 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0701 22:55:51.129206 160696 out.go:239] Jul 01 22:55:50 kubernetes-upgrade-20220701225105-10066 kubelet[6197]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
Jul 01 22:55:50 kubernetes-upgrade-20220701225105-10066 kubelet[6197]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:55:51.129213 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:55:51.129220 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:56:01.129928 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:56:01.563135 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:56:01.563218 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:56:01.592695 160696 cri.go:87] found id: ""
I0701 22:56:01.592722 160696 logs.go:274] 0 containers: []
W0701 22:56:01.592731 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:56:01.592738 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:56:01.592793 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:56:01.619252 160696 cri.go:87] found id: ""
I0701 22:56:01.619281 160696 logs.go:274] 0 containers: []
W0701 22:56:01.619292 160696 logs.go:276] No container was found matching "etcd"
I0701 22:56:01.619300 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:56:01.619352 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:56:01.652542 160696 cri.go:87] found id: ""
I0701 22:56:01.652571 160696 logs.go:274] 0 containers: []
W0701 22:56:01.652581 160696 logs.go:276] No container was found matching "coredns"
I0701 22:56:01.652589 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:56:01.652648 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:56:01.684585 160696 cri.go:87] found id: ""
I0701 22:56:01.684614 160696 logs.go:274] 0 containers: []
W0701 22:56:01.684622 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:56:01.684630 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:56:01.684690 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:56:01.715317 160696 cri.go:87] found id: ""
I0701 22:56:01.715342 160696 logs.go:274] 0 containers: []
W0701 22:56:01.715349 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:56:01.715357 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:56:01.715403 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:56:01.743629 160696 cri.go:87] found id: ""
I0701 22:56:01.743650 160696 logs.go:274] 0 containers: []
W0701 22:56:01.743658 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:56:01.743668 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:56:01.743716 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:56:01.781823 160696 cri.go:87] found id: ""
I0701 22:56:01.781846 160696 logs.go:274] 0 containers: []
W0701 22:56:01.781853 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:56:01.781860 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:56:01.781913 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:56:01.811885 160696 cri.go:87] found id: ""
I0701 22:56:01.811918 160696 logs.go:274] 0 containers: []
W0701 22:56:01.811928 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:56:01.811941 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:56:01.811957 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:56:01.828794 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:56:01.828824 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:56:01.891766 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 22:56:01.891794 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:56:01.891809 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:56:01.950065 160696 logs.go:123] Gathering logs for container status ...
I0701 22:56:01.950100 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:56:01.984620 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:56:01.984655 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:56:02.048126 160696 logs.go:138] Found kubelet problem: Jul 01 22:56:01 kubernetes-upgrade-20220701225105-10066 kubelet[6433]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:56:02.104666 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:56:02.104696 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:56:02.104822 160696 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0701 22:56:02.104836 160696 out.go:239] Jul 01 22:56:01 kubernetes-upgrade-20220701225105-10066 kubelet[6433]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
Jul 01 22:56:01 kubernetes-upgrade-20220701225105-10066 kubelet[6433]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:56:02.104840 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:56:02.104845 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:56:12.106368 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:56:12.562626 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:56:12.562693 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:56:12.587551 160696 cri.go:87] found id: ""
I0701 22:56:12.587583 160696 logs.go:274] 0 containers: []
W0701 22:56:12.587592 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:56:12.587599 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:56:12.587655 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:56:12.614992 160696 cri.go:87] found id: ""
I0701 22:56:12.615023 160696 logs.go:274] 0 containers: []
W0701 22:56:12.615033 160696 logs.go:276] No container was found matching "etcd"
I0701 22:56:12.615041 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:56:12.615091 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:56:12.641795 160696 cri.go:87] found id: ""
I0701 22:56:12.641828 160696 logs.go:274] 0 containers: []
W0701 22:56:12.641840 160696 logs.go:276] No container was found matching "coredns"
I0701 22:56:12.641849 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:56:12.641904 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:56:12.671467 160696 cri.go:87] found id: ""
I0701 22:56:12.671490 160696 logs.go:274] 0 containers: []
W0701 22:56:12.671496 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:56:12.671501 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:56:12.671539 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:56:12.695033 160696 cri.go:87] found id: ""
I0701 22:56:12.695061 160696 logs.go:274] 0 containers: []
W0701 22:56:12.695069 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:56:12.695076 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:56:12.695127 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:56:12.724522 160696 cri.go:87] found id: ""
I0701 22:56:12.724550 160696 logs.go:274] 0 containers: []
W0701 22:56:12.724559 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:56:12.724566 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:56:12.724620 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:56:12.748385 160696 cri.go:87] found id: ""
I0701 22:56:12.748409 160696 logs.go:274] 0 containers: []
W0701 22:56:12.748417 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:56:12.748425 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:56:12.748477 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:56:12.770615 160696 cri.go:87] found id: ""
I0701 22:56:12.770637 160696 logs.go:274] 0 containers: []
W0701 22:56:12.770643 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:56:12.770652 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:56:12.770665 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:56:12.817319 160696 logs.go:138] Found kubelet problem: Jul 01 22:56:12 kubernetes-upgrade-20220701225105-10066 kubelet[6722]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:56:12.882185 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:56:12.882217 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:56:12.896191 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:56:12.896214 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:56:12.951475 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 22:56:12.951499 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:56:12.951509 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:56:12.988895 160696 logs.go:123] Gathering logs for container status ...
I0701 22:56:12.988927 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:56:13.014844 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:56:13.014873 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:56:13.014983 160696 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0701 22:56:13.014999 160696 out.go:239] Jul 01 22:56:12 kubernetes-upgrade-20220701225105-10066 kubelet[6722]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
Jul 01 22:56:12 kubernetes-upgrade-20220701225105-10066 kubelet[6722]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:56:13.015006 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:56:13.015012 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:56:23.015738 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:56:23.063438 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 22:56:23.063513 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 22:56:23.087031 160696 cri.go:87] found id: ""
I0701 22:56:23.087053 160696 logs.go:274] 0 containers: []
W0701 22:56:23.087061 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 22:56:23.087070 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 22:56:23.087113 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 22:56:23.108796 160696 cri.go:87] found id: ""
I0701 22:56:23.108827 160696 logs.go:274] 0 containers: []
W0701 22:56:23.108835 160696 logs.go:276] No container was found matching "etcd"
I0701 22:56:23.108841 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 22:56:23.108881 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 22:56:23.136434 160696 cri.go:87] found id: ""
I0701 22:56:23.136459 160696 logs.go:274] 0 containers: []
W0701 22:56:23.136466 160696 logs.go:276] No container was found matching "coredns"
I0701 22:56:23.136473 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 22:56:23.136521 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 22:56:23.164257 160696 cri.go:87] found id: ""
I0701 22:56:23.164290 160696 logs.go:274] 0 containers: []
W0701 22:56:23.164299 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 22:56:23.164306 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 22:56:23.164349 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 22:56:23.188353 160696 cri.go:87] found id: ""
I0701 22:56:23.188386 160696 logs.go:274] 0 containers: []
W0701 22:56:23.188394 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 22:56:23.188400 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 22:56:23.188444 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 22:56:23.211193 160696 cri.go:87] found id: ""
I0701 22:56:23.211218 160696 logs.go:274] 0 containers: []
W0701 22:56:23.211226 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 22:56:23.211232 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 22:56:23.211283 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 22:56:23.233393 160696 cri.go:87] found id: ""
I0701 22:56:23.233417 160696 logs.go:274] 0 containers: []
W0701 22:56:23.233426 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 22:56:23.233432 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 22:56:23.233477 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 22:56:23.256986 160696 cri.go:87] found id: ""
I0701 22:56:23.257016 160696 logs.go:274] 0 containers: []
W0701 22:56:23.257024 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 22:56:23.257039 160696 logs.go:123] Gathering logs for kubelet ...
I0701 22:56:23.257053 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 22:56:23.303988 160696 logs.go:138] Found kubelet problem: Jul 01 22:56:22 kubernetes-upgrade-20220701225105-10066 kubelet[7020]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:56:23.350427 160696 logs.go:123] Gathering logs for dmesg ...
I0701 22:56:23.350464 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 22:56:23.366689 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 22:56:23.366759 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 22:56:23.414683 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 22:56:23.414716 160696 logs.go:123] Gathering logs for containerd ...
I0701 22:56:23.414732 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 22:56:23.459290 160696 logs.go:123] Gathering logs for container status ...
I0701 22:56:23.459330 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0701 22:56:23.487564 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:56:23.487589 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0701 22:56:23.487699 160696 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0701 22:56:23.487714 160696 out.go:239] Jul 01 22:56:22 kubernetes-upgrade-20220701225105-10066 kubelet[7020]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
Jul 01 22:56:22 kubernetes-upgrade-20220701225105-10066 kubelet[7020]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 22:56:23.487719 160696 out.go:309] Setting ErrFile to fd 2...
I0701 22:56:23.487726 160696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:56:33.489062 160696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 22:56:33.497208 160696 kubeadm.go:630] restartCluster took 4m2.606075472s
W0701 22:56:33.497319 160696 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
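restartCluster has a fixed budget for the apiserver to appear; after 4m2s of the polling above, minikube gives up on restarting and falls back to a full re-init: kubeadm reset to wipe the cluster state, then kubeadm init against the generated config, as the next Run lines show. Condensed from those two commands (a sketch; the full --ignore-preflight-errors list appears verbatim below):
  sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
    kubeadm reset --cri-socket /run/containerd/containerd.sock --force
  sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...
The config check immediately after the reset finds no /etc/kubernetes/*.conf files, which is expected: the reset deleted them, so init regenerates the kubeconfigs from the certificates still on disk under /var/lib/minikube/certs.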
I0701 22:56:33.497343 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0701 22:56:34.188962 160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:56:34.198405 160696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 22:56:34.205377 160696 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0701 22:56:34.205428 160696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 22:56:34.212306 160696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0701 22:56:34.212346 160696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0701 22:56:34.488887 160696 out.go:204] - Generating certificates and keys ...
I0701 22:56:35.222092 160696 out.go:204] - Booting up control plane ...
W0701 22:58:30.234697 160696 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1012-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0701 22:56:34.249651 7560 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1012-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0701 22:56:34.249651 7560 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
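The init failure above is the same kubelet crash in a different guise: kubeadm writes /var/lib/kubelet/kubeadm-flags.env and config.yaml, starts the kubelet, then polls http://localhost:10248/healthz for up to 4m0s waiting for the static control-plane pods. Because the kubelet exits immediately on the unparseable flag, that endpoint never answers and wait-control-plane times out; the two preflight WARNINGs (missing "configs" kernel module, kubelet service not enabled) are incidental. Hand checks matching the failure, assuming shell access on the node (URL and unit name are taken from the log; where the offending flag is injected is not shown here, so the last command is only a likely place to look):
  curl -sSL http://localhost:10248/healthz       # connection refused while the kubelet is down
  sudo journalctl -xeu kubelet | tail -n 20      # the flag-parse error on every restart attempt
  cat /var/lib/kubelet/kubeadm-flags.env         # kubeadm's share of the kubelet flags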
I0701 22:58:30.234763 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0701 22:58:31.215309 160696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0701 22:58:31.225392 160696 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0701 22:58:31.225450 160696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 22:58:31.233427 160696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0701 22:58:31.233475 160696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0701 22:58:31.507505 160696 out.go:204] - Generating certificates and keys ...
I0701 22:58:32.633727 160696 out.go:204] - Booting up control plane ...
I0701 23:00:27.646987 160696 kubeadm.go:397] StartCluster complete in 7m56.791378401s
I0701 23:00:27.647038 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 23:00:27.647092 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 23:00:27.670385 160696 cri.go:87] found id: ""
I0701 23:00:27.670408 160696 logs.go:274] 0 containers: []
W0701 23:00:27.670416 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 23:00:27.670424 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 23:00:27.670479 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 23:00:27.695513 160696 cri.go:87] found id: ""
I0701 23:00:27.695537 160696 logs.go:274] 0 containers: []
W0701 23:00:27.695546 160696 logs.go:276] No container was found matching "etcd"
I0701 23:00:27.695555 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 23:00:27.695610 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 23:00:27.718045 160696 cri.go:87] found id: ""
I0701 23:00:27.718072 160696 logs.go:274] 0 containers: []
W0701 23:00:27.718081 160696 logs.go:276] No container was found matching "coredns"
I0701 23:00:27.718088 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 23:00:27.718135 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 23:00:27.742214 160696 cri.go:87] found id: ""
I0701 23:00:27.742241 160696 logs.go:274] 0 containers: []
W0701 23:00:27.742249 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 23:00:27.742257 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 23:00:27.742312 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 23:00:27.764994 160696 cri.go:87] found id: ""
I0701 23:00:27.765033 160696 logs.go:274] 0 containers: []
W0701 23:00:27.765040 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 23:00:27.765047 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 23:00:27.765095 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 23:00:27.787131 160696 cri.go:87] found id: ""
I0701 23:00:27.787155 160696 logs.go:274] 0 containers: []
W0701 23:00:27.787161 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 23:00:27.787166 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 23:00:27.787206 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 23:00:27.809474 160696 cri.go:87] found id: ""
I0701 23:00:27.809497 160696 logs.go:274] 0 containers: []
W0701 23:00:27.809503 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 23:00:27.809508 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 23:00:27.809552 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 23:00:27.832826 160696 cri.go:87] found id: ""
I0701 23:00:27.832850 160696 logs.go:274] 0 containers: []
W0701 23:00:27.832857 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 23:00:27.832867 160696 logs.go:123] Gathering logs for kubelet ...
I0701 23:00:27.832877 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 23:00:27.883957 160696 logs.go:138] Found kubelet problem: Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 23:00:27.939530 160696 logs.go:123] Gathering logs for dmesg ...
I0701 23:00:27.939568 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 23:00:27.959413 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 23:00:27.959491 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 23:00:28.015733 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 23:00:28.015759 160696 logs.go:123] Gathering logs for containerd ...
I0701 23:00:28.015772 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 23:00:28.063280 160696 logs.go:123] Gathering logs for container status ...
I0701 23:00:28.063306 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
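At this point StartCluster has spent close to eight minutes across one restart attempt and two init attempts, and this final diagnostic sweep still finds no control-plane containers, so the start command surfaces the kubeadm init failure verbatim as the cluster start error below.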
W0701 23:00:28.089939 160696 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1012-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0701 22:58:31.270801 9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
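kubeadm's crictl advice above can be condensed into a small shell sketch that dumps the logs of the first kube container it finds. This is a sketch, not part of the test run; 'SOCK' and 'CID' are hypothetical helper variables, and the endpoint is the containerd socket named in the advice:
  # List kube containers (excluding pause), take the first container ID, dump its logs.
  SOCK=unix:///run/containerd/containerd.sock
  CID=$(sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause | awk 'NR==1{print $1}')
  [ -n "$CID" ] && sudo crictl --runtime-endpoint "$SOCK" logs "$CID"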
W0701 23:00:28.089978 160696 out.go:239] *
W0701 23:00:28.091045 160696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 23:00:28.093078 160696 out.go:177] X Problems detected in kubelet:
I0701 23:00:28.095148 160696 out.go:177] Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
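This kubelet line is the real failure: kubelet v1.24 removed '--cni-conf-dir' along with the other dockershim-era networking flags, but the profile, created under v1.16, still passes it, so the kubelet exits before kubeadm's health checks can ever succeed. One possible recovery, sketched below under the assumption that the cluster state is disposable (it abandons the in-place-upgrade scenario this test exercises), is to discard the stale profile and start clean at the new version:
  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066
  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 \
    --kubernetes-version=v1.24.2 --driver=docker --container-runtime=containerd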
I0701 23:00:28.098679 160696 out.go:177]
W0701 23:00:28.100157 160696 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0701 23:00:28.100275 160696 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
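The suggested override is passed like any other '--extra-config' component setting. A sketch of that retry, noting that this generic hint targets cgroup-driver mismatches and likely would not address the unknown-flag error shown above:
  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 \
    --extra-config=kubelet.cgroup-driver=systemd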
W0701 23:00:28.100315 160696 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0701 23:00:28.102745 160696 out.go:177]
** /stderr **
version_upgrade_test.go:252: failed to upgrade to the newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 --memory=2200 --kubernetes-version=v1.24.2 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd : exit status 109
version_upgrade_test.go:255: (dbg) Run: kubectl --context kubernetes-upgrade-20220701225105-10066 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220701225105-10066 version --output=json: exit status 1 (50.503185ms)
-- stdout --
{
"clientVersion": {
"major": "1",
"minor": "24",
"gitVersion": "v1.24.2",
"gitCommit": "f66044f4361b9f1f96f0053dd46cb7dce5e990a8",
"gitTreeState": "clean",
"buildDate": "2022-06-15T14:22:29Z",
"goVersion": "go1.18.3",
"compiler": "gc",
"platform": "linux/amd64"
},
"kustomizeVersion": "v4.5.4"
}
-- /stdout --
** stderr **
The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?
** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
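The client half of 'kubectl version' is computed locally, which is why clientVersion still prints; only the server query failed. A quick probe of the endpoint from the error, assuming the apiserver's default anonymous-readable health endpoint:
  # A healthy apiserver answers "ok"; connection refused reproduces the failure above.
  curl -k https://192.168.76.2:8443/healthz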
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-07-01 23:00:28.257535812 +0000 UTC m=+2217.562473326
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect kubernetes-upgrade-20220701225105-10066
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220701225105-10066:
-- stdout --
[
{
"Id": "6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60",
"Created": "2022-07-01T22:51:17.262095505Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 161168,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-07-01T22:52:01.941680271Z",
"FinishedAt": "2022-07-01T22:51:59.909816228Z"
},
"Image": "sha256:573e7be5768273a7845baee1ae90fa2e33b83b10a7fbb0f0f41efbf29b53d1f1",
"ResolvConfPath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/hostname",
"HostsPath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/hosts",
"LogPath": "/var/lib/docker/containers/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60/6bb642abc37bb087095bb19e950542a27f883db2c629a53cbae2b3f9ebfb5f60-json.log",
"Name": "/kubernetes-upgrade-20220701225105-10066",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"kubernetes-upgrade-20220701225105-10066:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "kubernetes-upgrade-20220701225105-10066",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be-init/diff:/var/lib/docker/overlay2/edb6d17bdc5cabdb1fac75f829fa6b696b451285a56daf994d10472ac162d1fa/diff:/var/lib/docker/overlay2/12553db54f9401cb3cd4ea304c82950d708bec4ffbeafa05f361df03389b23f5/diff:/var/lib/docker/overlay2/7e3fcd2813dd5acb24030b1419b7a8a53ea2329287a62c3cd166eb5e47617a4e/diff:/var/lib/docker/overlay2/15f31da888d327ea20972c47badf360a35e5d2b17a3fae8f69cb0d0d1db2bfa0/diff:/var/lib/docker/overlay2/4d7fcdaa5483b16cc999e79056aae74f5814a7aca964696154ac6ec31eebe508/diff:/var/lib/docker/overlay2/9f3a7dbc3c70fba7c77f5d8e54e394ac0c37123defe036d76af7d048d08b974e/diff:/var/lib/docker/overlay2/4f0b0b0343fd3ac1a6d5f774e91228b75f1e33f810714d09d38ecd1e38cfe250/diff:/var/lib/docker/overlay2/bfd684b2b6e2cfe1f30901dbf02332074c65e296b386b083ea38c10128b42115/diff:/var/lib/docker/overlay2/d05e53b39ab1962ee7dd1a17c7c6df7e7db940db5a19b0d9312ccafaf3c67afc/diff:/var/lib/docker/overlay2/7a24a3
8f96e1aff9985bf714d962a37888eced9c4b8a3018e499080634306871/diff:/var/lib/docker/overlay2/b54ee85e20ecd71a593d453d879a26c40869049e30ba5b18b4c021c4cf667ec6/diff:/var/lib/docker/overlay2/cc9f249b45cce30a5a0432a644435d68e765dc097e3bfdbde41b7d725f258866/diff:/var/lib/docker/overlay2/dac381b24f77e5de76295d05ca2d4ae6a611d31efdf28050dc8acbc16f893037/diff:/var/lib/docker/overlay2/fe9cd2517ce90b2330101b987c76605b9d73f40b8c967e33975ad22bd3e427df/diff:/var/lib/docker/overlay2/34da7eb6e789c53ba8f86e0d904243d3314979a523044238ab8b7e9e95f685d4/diff:/var/lib/docker/overlay2/44c3ffe75f70e4d9d5d17064609a4a167961cdca81aab03b2a72dfbe05d09f41/diff:/var/lib/docker/overlay2/76feeb878d0278a48230287307ce3a01000291c1284d2222d114e37836ebc153/diff:/var/lib/docker/overlay2/3ab5eb284edb3402610bb5d4fbd0d8c3fc6b53fd39d2a66cc3a9bb1e313fe5ee/diff:/var/lib/docker/overlay2/7604bb52dbaf4e2c359f0b2b11aa345bbf33498b0346044f4c6ff3178a3e8df5/diff:/var/lib/docker/overlay2/0a19abc50663f9812511b849524a7625a16f183fa64aff5a0a9c1c7664cc2027/diff:/var/lib/d
ocker/overlay2/00d25c59a62da71857955ae29c7c803b77cb4c97b23913f166c62440a3b63456/diff:/var/lib/docker/overlay2/cd6c06e5c4d7bdfb5a78ffcf6c0cf331a6101b713bd238ad5c0ab119ad599bf4/diff:/var/lib/docker/overlay2/73ae644d477b17687dc53b7176c9e05aa10e0e0cc22b0d2adbd96493280d2696/diff:/var/lib/docker/overlay2/1490b9c59b00b3ee6078671a8023960891ee2bfc21bfe92c12a71c87ea04fff1/diff:/var/lib/docker/overlay2/42f40e76b67a2f74340f543b0beab44f98b10e0a0903d4006a173272c52a31d0/diff:/var/lib/docker/overlay2/648763e18e8325f20ff26dd88a1788a2d9e575a9ca24f44b2430918381add179/diff:/var/lib/docker/overlay2/955b13af9b6a054abdcd0dfdf71ce57d6da1b5718bee40c52fc7739b4ff16496/diff:/var/lib/docker/overlay2/1e55f6bfa306e6667808548601e508c0335b7021c9c6791ee5ed3df2514aff12/diff:/var/lib/docker/overlay2/ed1fd4b4489438557a54639b2ed769eb0371193cd0129b222edea8f79274f7e8/diff:/var/lib/docker/overlay2/c4058f514a3e669de2d6139c7bc94d4187679acabf288e44e25b5d50db5b14ac/diff:/var/lib/docker/overlay2/f55c5f04c4a64f585a9b90e7f23e5d5142014f06b3bf877739879029b76
b574f/diff:/var/lib/docker/overlay2/74b9141dda92027ee02e8b3abc4bb05ed13ce01d58f4c3d644e7235e411a863e/diff:/var/lib/docker/overlay2/53dd59b4b1dfa382f4b296dffd27b84e247851098ea1c719fa3b8ed3e158e73d/diff:/var/lib/docker/overlay2/01e3082bf93042f963fa4d0b92796bd5a1f3663e87f24a76a6fb5f3dabdb3d2f/diff:/var/lib/docker/overlay2/6b96d2f01ba7bf16240d81397be753c74fa1eb78889ce36d1a88224496f5ee64/diff:/var/lib/docker/overlay2/1c326eb1859e3f119094b72e595fe89b3be1895193b0f9124ec3aa7be78fa0ff/diff:/var/lib/docker/overlay2/9c31f48fe49160b4b3d824b787329ea1d50133fd86b0c7f2054bebed2bd136b4/diff:/var/lib/docker/overlay2/6b78f7313fcedc631cc839c84f80e065670f47ccc8e548578527bec6b0f2cce3/diff:/var/lib/docker/overlay2/6bf15a167fccdef650b6e6b0daa521189e2c744a9a41640bc0e963421f139437/diff:/var/lib/docker/overlay2/b01230a9beae37bded75410c1a3849626080918fdc36535a1f4092f9227a7ccf/diff:/var/lib/docker/overlay2/793ce1632cff927c8e2528b23a62c94ed141cb52d5df04225085072c61ca5bb7/diff",
"MergedDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be/merged",
"UpperDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be/diff",
"WorkDir": "/var/lib/docker/overlay2/088a5282bd64f53bf175ddb9ebf1d64415dace5742a4b38574e1ef7ebf3fb1be/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "kubernetes-upgrade-20220701225105-10066",
"Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220701225105-10066/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "kubernetes-upgrade-20220701225105-10066",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220701225105-10066",
"name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220701225105-10066",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "04455c6897a67958cb3d112305394e24b2d4bc35b3b66e559f619d57fe81e2e1",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49338"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49337"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49334"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49336"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49335"
}
]
},
"SandboxKey": "/var/run/docker/netns/04455c6897a6",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"kubernetes-upgrade-20220701225105-10066": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": [
"6bb642abc37b",
"kubernetes-upgrade-20220701225105-10066"
],
"NetworkID": "3bc5e9344b9b90b1679edbd09c9063fb186936a7f0aaa6c9c5a8168603edf88b",
"EndpointID": "45867237600cc1d7b13018c1669df28c5974e752d70fcc8f2b27bc7c61aa4d8d",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
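Single fields can be pulled out of a dump like this with 'docker inspect -f' and a Go template instead of scanning the JSON. A sketch for the two values this post-mortem cares about, the container's IP on the profile network (192.168.76.2 above) and the published 8443 host port (49335 above):
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-20220701225105-10066
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-20220701225105-10066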
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066: exit status 2 (404.53817ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-20220701225105-10066 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
| | cert-expiration-20220701225121-10066 | | | | | |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
| | cert-expiration-20220701225121-10066 | | | | | |
| start | -p calico-20220701225121-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:56 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=calico --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
| | kindnet-20220701225120-10066 | | | | | |
| | pgrep -a kubelet | | | | | |
| ssh | -p auto-20220701225119-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:55 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:55 UTC | 01 Jul 22 22:56 UTC |
| | kindnet-20220701225120-10066 | | | | | |
| delete | -p auto-20220701225119-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | |
| | enable-default-cni-20220701225120-10066 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --enable-default-cni=true | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p bridge-20220701225120-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=bridge --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p bridge-20220701225120-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
| | pgrep -a kubelet | | | | | |
| ssh | -p calico-20220701225121-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p calico-20220701225121-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:56 UTC |
| delete | -p bridge-20220701225120-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:57 UTC |
| start | -p cilium-20220701225121-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:56 UTC | 01 Jul 22 22:58 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=cilium --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | |
| | old-k8s-version-20220701225700-10066 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| ssh | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
| | enable-default-cni-20220701225120-10066 | | | | | |
| | pgrep -a kubelet | | | | | |
| delete | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | 01 Jul 22 22:57 UTC |
| | enable-default-cni-20220701225120-10066 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:57 UTC | |
| | no-preload-20220701225718-10066 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.2 | | | | | |
| ssh | -p cilium-20220701225121-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p cilium-20220701225121-10066 | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:58 UTC |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:58 UTC | 01 Jul 22 22:59 UTC |
| | embed-certs-20220701225830-10066 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --embed-certs | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.2 | | | | | |
| addons | enable metrics-server -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
| | embed-certs-20220701225830-10066 | | | | | |
| | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
| | embed-certs-20220701225830-10066 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | 01 Jul 22 22:59 UTC |
| | embed-certs-20220701225830-10066 | | | | | |
| | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 | | | | | |
| start | -p | minikube | jenkins | v1.26.0 | 01 Jul 22 22:59 UTC | |
| | embed-certs-20220701225830-10066 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --embed-certs | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.2 | | | | | |
|---------|---------------------------------------------------|----------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/01 22:59:58
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.18.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0701 22:59:58.270911 235408 out.go:296] Setting OutFile to fd 1 ...
I0701 22:59:58.271044 235408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:59:58.271055 235408 out.go:309] Setting ErrFile to fd 2...
I0701 22:59:58.271060 235408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0701 22:59:58.271550 235408 root.go:329] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/bin
I0701 22:59:58.271787 235408 out.go:303] Setting JSON to false
I0701 22:59:58.273819 235408 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2551,"bootTime":1656713847,"procs":1296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1012-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0701 22:59:58.273890 235408 start.go:125] virtualization: kvm guest
I0701 22:59:58.276339 235408 out.go:177] * [embed-certs-20220701225830-10066] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
I0701 22:59:58.278020 235408 out.go:177] - MINIKUBE_LOCATION=14483
I0701 22:59:58.277941 235408 notify.go:193] Checking for updates...
I0701 22:59:58.279654 235408 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 22:59:58.281170 235408 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 22:59:58.282568 235408 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube
I0701 22:59:58.284168 235408 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0701 22:59:58.286596 235408 config.go:178] Loaded profile config "embed-certs-20220701225830-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
I0701 22:59:58.287647 235408 driver.go:360] Setting default libvirt URI to qemu:///system
I0701 22:59:58.329907 235408 docker.go:137] docker version: linux-20.10.17
I0701 22:59:58.330245 235408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:59:58.438728 235408 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:65 SystemTime:2022-07-01 22:59:58.361863628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:59:58.438835 235408 docker.go:254] overlay module found
I0701 22:59:58.441052 235408 out.go:177] * Using the docker driver based on existing profile
I0701 22:59:58.442662 235408 start.go:284] selected driver: docker
I0701 22:59:58.442683 235408 start.go:808] validating driver "docker" against &{Name:embed-certs-20220701225830-10066 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:59:58.442785 235408 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 22:59:58.443603 235408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0701 22:59:58.550264 235408 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:65 SystemTime:2022-07-01 22:59:58.473008189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1012-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662447616 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-2 Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0701 22:59:58.550632 235408 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0701 22:59:58.550662 235408 cni.go:95] Creating CNI manager for ""
I0701 22:59:58.550671 235408 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
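[Note] The CNI choice logged above is driven by the driver/runtime pair: with the docker driver, only the docker runtime can rely on built-in container networking, so a containerd (or cri-o) profile gets kindnet. A rough illustration of that rule, with a hypothetical package and function name, not minikube's actual code:

    // Illustrative sketch only; names are hypothetical, not minikube internals.
    package cni

    // chooseCNI mirrors the logged decision: the docker driver with a
    // non-docker runtime (containerd here) is paired with kindnet.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "" // fall back to auto-detection / built-in networking
    }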
I0701 22:59:58.550681 235408 start_flags.go:310] config:
{Name:embed-certs-20220701225830-10066 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cl
uster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNo
deRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 22:59:58.553050 235408 out.go:177] * Starting control plane node embed-certs-20220701225830-10066 in cluster embed-certs-20220701225830-10066
I0701 22:59:58.554461 235408 cache.go:120] Beginning downloading kic base image for docker with containerd
I0701 22:59:58.555785 235408 out.go:177] * Pulling base image ...
I0701 22:59:58.557082 235408 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
I0701 22:59:58.557119 235408 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon
I0701 22:59:58.557122 235408 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4
I0701 22:59:58.557220 235408 cache.go:57] Caching tarball of preloaded images
I0701 22:59:58.557426 235408 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 22:59:58.557449 235408 cache.go:60] Finished verifying existence of preloaded tar for v1.24.2 on containerd
I0701 22:59:58.557546 235408 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/config.json ...
I0701 22:59:58.592438 235408 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e in local docker daemon, skipping pull
I0701 22:59:58.592464 235408 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e exists in daemon, skipping load
I0701 22:59:58.592485 235408 cache.go:208] Successfully downloaded all kic artifacts
I0701 22:59:58.592532 235408 start.go:352] acquiring machines lock for embed-certs-20220701225830-10066: {Name:mk7700ad3a5ae6c33755b1735ad652e63d9ad7e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 22:59:58.592631 235408 start.go:356] acquired machines lock for "embed-certs-20220701225830-10066" in 75.226µs
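[Note] The lock parameters printed two lines up (Delay:500ms Timeout:10m0s) describe a polling file lock: try to take the lock, and if another start holds it, retry every 500ms for up to 10 minutes; here it was uncontended and acquired in 75µs. A minimal sketch of that pattern (the lock-file path and helper names are assumptions, not minikube's implementation):

    // Polling file-lock sketch; path and helper names are assumptions.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation fail while another process holds the lock.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to provision the machine")
    }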
I0701 22:59:58.592654 235408 start.go:94] Skipping create...Using existing machine configuration
I0701 22:59:58.592662 235408 fix.go:55] fixHost starting:
I0701 22:59:58.592902 235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
I0701 22:59:58.627478 235408 fix.go:103] recreateIfNeeded on embed-certs-20220701225830-10066: state=Stopped err=<nil>
W0701 22:59:58.627515 235408 fix.go:129] unexpected machine state, will restart: <nil>
I0701 22:59:58.629575 235408 out.go:177] * Restarting existing docker container for "embed-certs-20220701225830-10066" ...
I0701 22:59:55.580108 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 22:59:58.079689 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 22:59:57.050728 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 22:59:59.051359 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 22:59:58.630835 235408 cli_runner.go:164] Run: docker start embed-certs-20220701225830-10066
I0701 22:59:59.018434 235408 cli_runner.go:164] Run: docker container inspect embed-certs-20220701225830-10066 --format={{.State.Status}}
I0701 22:59:59.055830 235408 kic.go:416] container "embed-certs-20220701225830-10066" state is running.
I0701 22:59:59.056143 235408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220701225830-10066
I0701 22:59:59.091907 235408 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/config.json ...
I0701 22:59:59.092126 235408 machine.go:88] provisioning docker machine ...
I0701 22:59:59.092152 235408 ubuntu.go:169] provisioning hostname "embed-certs-20220701225830-10066"
I0701 22:59:59.092194 235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
I0701 22:59:59.126469 235408 main.go:134] libmachine: Using SSH client type: native
I0701 22:59:59.126692 235408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49412 <nil> <nil>}
I0701 22:59:59.126726 235408 main.go:134] libmachine: About to run SSH command:
sudo hostname embed-certs-20220701225830-10066 && echo "embed-certs-20220701225830-10066" | sudo tee /etc/hostname
I0701 22:59:59.127378 235408 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34518->127.0.0.1:49412: read: connection reset by peer
I0701 23:00:02.250726 235408 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220701225830-10066
I0701 23:00:02.250819 235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
I0701 23:00:02.285023 235408 main.go:134] libmachine: Using SSH client type: native
I0701 23:00:02.285162 235408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil> [] 0s} 127.0.0.1 49412 <nil> <nil>}
I0701 23:00:02.285182 235408 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-20220701225830-10066' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220701225830-10066/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-20220701225830-10066' | sudo tee -a /etc/hosts;
fi
fi
I0701 23:00:02.398118 235408 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0701 23:00:02.398172 235408 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube}
I0701 23:00:02.398207 235408 ubuntu.go:177] setting up certificates
I0701 23:00:02.398218 235408 provision.go:83] configureAuth start
I0701 23:00:02.398280 235408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220701225830-10066
I0701 23:00:02.434428 235408 provision.go:138] copyHostCerts
I0701 23:00:02.434495 235408 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem, removing ...
I0701 23:00:02.434514 235408 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem
I0701 23:00:02.434613 235408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.pem (1078 bytes)
I0701 23:00:02.434703 235408 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem, removing ...
I0701 23:00:02.434716 235408 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem
I0701 23:00:02.434755 235408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/cert.pem (1123 bytes)
I0701 23:00:02.434825 235408 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem, removing ...
I0701 23:00:02.434835 235408 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem
I0701 23:00:02.434867 235408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/key.pem (1679 bytes)
I0701 23:00:02.434929 235408 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220701225830-10066 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220701225830-10066]
I0701 23:00:02.558945 235408 provision.go:172] copyRemoteCerts
I0701 23:00:02.558992 235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 23:00:02.559031 235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
I0701 23:00:02.594795 235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
I0701 23:00:02.681946 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 23:00:02.699903 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
I0701 23:00:02.717597 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 23:00:02.734964 235408 provision.go:86] duration metric: configureAuth took 336.727262ms
I0701 23:00:02.734990 235408 ubuntu.go:193] setting minikube options for container-runtime
I0701 23:00:02.735182 235408 config.go:178] Loaded profile config "embed-certs-20220701225830-10066": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.2
I0701 23:00:02.735198 235408 machine.go:91] provisioned docker machine in 3.643056522s
I0701 23:00:02.735207 235408 start.go:306] post-start starting for "embed-certs-20220701225830-10066" (driver="docker")
I0701 23:00:02.735214 235408 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 23:00:02.735263 235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 23:00:02.735300 235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
I0701 23:00:02.768989 235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
I0701 23:00:02.853823 235408 ssh_runner.go:195] Run: cat /etc/os-release
I0701 23:00:02.856393 235408 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0701 23:00:02.856413 235408 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0701 23:00:02.856421 235408 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0701 23:00:02.856427 235408 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0701 23:00:02.856435 235408 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/addons for local assets ...
I0701 23:00:02.856509 235408 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files for local assets ...
I0701 23:00:02.856593 235408 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem -> 100662.pem in /etc/ssl/certs
I0701 23:00:02.856667 235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 23:00:02.863109 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /etc/ssl/certs/100662.pem (1708 bytes)
I0701 23:00:02.879759 235408 start.go:309] post-start completed in 144.541927ms
I0701 23:00:02.879824 235408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0701 23:00:02.879861 235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
I0701 23:00:02.912798 235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
I0701 23:00:02.994828 235408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0701 23:00:02.998675 235408 fix.go:57] fixHost completed within 4.406009666s
I0701 23:00:02.998715 235408 start.go:81] releasing machines lock for "embed-certs-20220701225830-10066", held for 4.406070232s
I0701 23:00:02.998811 235408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220701225830-10066
I0701 23:00:03.033064 235408 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0701 23:00:03.033113 235408 ssh_runner.go:195] Run: systemctl --version
I0701 23:00:03.033138 235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
I0701 23:00:03.033145 235408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220701225830-10066
I0701 23:00:03.071007 235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
I0701 23:00:03.071411 235408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/machines/embed-certs-20220701225830-10066/id_rsa Username:docker}
I0701 23:00:03.171479 235408 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0701 23:00:03.182465 235408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 23:00:03.191354 235408 docker.go:179] disabling docker service ...
I0701 23:00:03.191394 235408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0701 23:00:03.200968 235408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0701 23:00:03.209599 235408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0701 23:00:00.079994 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:02.580583 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:01.550421 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:03.550712 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:03.281130 235408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0701 23:00:03.352077 235408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0701 23:00:03.360716 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 23:00:03.372565 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
I0701 23:00:03.380204 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
I0701 23:00:03.388476 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
I0701 23:00:03.395966 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
I0701 23:00:03.403330 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
I0701 23:00:03.410688 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
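[Note] The base64 payload written here is tiny: "dmVyc2lvbiA9IDIK" decodes to "version = 2", i.e. the drop-in just pins the containerd config schema to version 2. A quick check:

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        b, err := base64.StdEncoding.DecodeString("dmVyc2lvbiA9IDIK")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(b)) // prints: version = 2
    }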
I0701 23:00:03.423058 235408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0701 23:00:03.429283 235408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0701 23:00:03.435491 235408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 23:00:03.505856 235408 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0701 23:00:03.577732 235408 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
I0701 23:00:03.577803 235408 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0701 23:00:03.581813 235408 start.go:471] Will wait 60s for crictl version
I0701 23:00:03.581857 235408 ssh_runner.go:195] Run: sudo crictl version
I0701 23:00:03.606591 235408 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-07-01T23:00:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0701 23:00:05.079847 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:07.080009 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:05.551452 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:08.050717 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:10.051095 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:09.580529 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:12.079913 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:14.654115 235408 ssh_runner.go:195] Run: sudo crictl version
I0701 23:00:14.679131 235408 start.go:480] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.6
RuntimeApiVersion: v1alpha2
I0701 23:00:14.679204 235408 ssh_runner.go:195] Run: containerd --version
I0701 23:00:14.711721 235408 ssh_runner.go:195] Run: containerd --version
I0701 23:00:14.745266 235408 out.go:177] * Preparing Kubernetes v1.24.2 on containerd 1.6.6 ...
I0701 23:00:12.550577 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:14.551246 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:14.747069 235408 cli_runner.go:164] Run: docker network inspect embed-certs-20220701225830-10066 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0701 23:00:14.779930 235408 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0701 23:00:14.783360 235408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0701 23:00:14.792794 235408 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime containerd
I0701 23:00:14.792859 235408 ssh_runner.go:195] Run: sudo crictl images --output json
I0701 23:00:14.816892 235408 containerd.go:547] all images are preloaded for containerd runtime.
I0701 23:00:14.816918 235408 containerd.go:461] Images already preloaded, skipping extraction
I0701 23:00:14.816965 235408 ssh_runner.go:195] Run: sudo crictl images --output json
I0701 23:00:14.840318 235408 containerd.go:547] all images are preloaded for containerd runtime.
I0701 23:00:14.840340 235408 cache_images.go:84] Images are preloaded, skipping loading
I0701 23:00:14.840388 235408 ssh_runner.go:195] Run: sudo crictl info
I0701 23:00:14.863691 235408 cni.go:95] Creating CNI manager for ""
I0701 23:00:14.863713 235408 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0701 23:00:14.863722 235408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0701 23:00:14.863734 235408 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220701225830-10066 NodeName:embed-certs-20220701225830-10066 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs Clien
tCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0701 23:00:14.863881 235408 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "embed-certs-20220701225830-10066"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
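[Note] The generated kubeadm config above is a single file holding four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Anything consuming it must iterate documents rather than decode once; a sketch using gopkg.in/yaml.v3:

    package main

    import (
        "fmt"
        "io"
        "strings"

        "gopkg.in/yaml.v3"
    )

    const cfg = "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"

    func main() {
        dec := yaml.NewDecoder(strings.NewReader(cfg))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(doc["kind"]) // one line per document
        }
    }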
I0701 23:00:14.863974 235408 kubeadm.go:961] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220701225830-10066 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
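[Note] The two ExecStart= lines in the kubelet drop-in above are deliberate: in a systemd override, an empty ExecStart= first clears the command inherited from the packaged kubelet.service, and the second line installs the minikube-specific command (remote runtime endpoints, node IP, hostname override). Without the blank reset, systemd would reject a second ExecStart for a non-oneshot service. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below.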
I0701 23:00:14.864027 235408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0701 23:00:14.870925 235408 binaries.go:44] Found k8s binaries, skipping transfer
I0701 23:00:14.870977 235408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0701 23:00:14.877458 235408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (525 bytes)
I0701 23:00:14.889840 235408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 23:00:14.902235 235408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
I0701 23:00:14.914307 235408 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0701 23:00:14.916993 235408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0701 23:00:14.925664 235408 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066 for IP: 192.168.67.2
I0701 23:00:14.925828 235408 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key
I0701 23:00:14.925883 235408 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key
I0701 23:00:14.925961 235408 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/client.key
I0701 23:00:14.926035 235408 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/apiserver.key.c7fa3a9e
I0701 23:00:14.926082 235408 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/proxy-client.key
I0701 23:00:14.926207 235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem (1338 bytes)
W0701 23:00:14.926248 235408 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066_empty.pem, impossibly tiny 0 bytes
I0701 23:00:14.926265 235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca-key.pem (1675 bytes)
I0701 23:00:14.926300 235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/ca.pem (1078 bytes)
I0701 23:00:14.926332 235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/cert.pem (1123 bytes)
I0701 23:00:14.926365 235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/key.pem (1679 bytes)
I0701 23:00:14.926418 235408 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem (1708 bytes)
I0701 23:00:14.927102 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0701 23:00:14.943560 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0701 23:00:14.959838 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 23:00:14.976190 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/profiles/embed-certs-20220701225830-10066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0701 23:00:14.992281 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 23:00:15.008766 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0701 23:00:15.025447 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 23:00:15.041939 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 23:00:15.058617 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/certs/10066.pem --> /usr/share/ca-certificates/10066.pem (1338 bytes)
I0701 23:00:15.075031 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/files/etc/ssl/certs/100662.pem --> /usr/share/ca-certificates/100662.pem (1708 bytes)
I0701 23:00:15.092212 235408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0701 23:00:15.108891 235408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 23:00:15.121155 235408 ssh_runner.go:195] Run: openssl version
I0701 23:00:15.125902 235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10066.pem && ln -fs /usr/share/ca-certificates/10066.pem /etc/ssl/certs/10066.pem"
I0701 23:00:15.132941 235408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10066.pem
I0701 23:00:15.136196 235408 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 1 22:28 /usr/share/ca-certificates/10066.pem
I0701 23:00:15.136235 235408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10066.pem
I0701 23:00:15.141121 235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10066.pem /etc/ssl/certs/51391683.0"
I0701 23:00:15.147658 235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/100662.pem && ln -fs /usr/share/ca-certificates/100662.pem /etc/ssl/certs/100662.pem"
I0701 23:00:15.154815 235408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/100662.pem
I0701 23:00:15.157595 235408 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 1 22:28 /usr/share/ca-certificates/100662.pem
I0701 23:00:15.157636 235408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/100662.pem
I0701 23:00:15.162343 235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/100662.pem /etc/ssl/certs/3ec20f2e.0"
I0701 23:00:15.168728 235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 23:00:15.176282 235408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 23:00:15.180512 235408 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 1 22:24 /usr/share/ca-certificates/minikubeCA.pem
I0701 23:00:15.180548 235408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 23:00:15.185322 235408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
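[Note] The *.0 symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash lookups: OpenSSL locates a CA in /etc/ssl/certs by hashing the certificate subject and opening <hash>.0. The hash comes from the same command the log runs; a small wrapper to reproduce it (the certificate path is taken from this log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            panic(err)
        }
        // OpenSSL resolves CAs via symlinks named <hash>.0, e.g. b5213941.0
        fmt.Printf("/etc/ssl/certs/%s.0\n", h)
    }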
I0701 23:00:15.191675 235408 kubeadm.go:395] StartCluster: {Name:embed-certs-20220701225830-10066 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656350719-14420@sha256:e7b7f38d1a2eba7828afc2c4c3d24e1d391db431976e47aa6dc5c7a6b038ca4e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:embed-certs-20220701225830-10066 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<
nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0701 23:00:15.191758 235408 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0701 23:00:15.191786 235408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0701 23:00:15.215440 235408 cri.go:87] found id: "4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b"
I0701 23:00:15.215466 235408 cri.go:87] found id: "49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05"
I0701 23:00:15.215477 235408 cri.go:87] found id: "f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853"
I0701 23:00:15.215487 235408 cri.go:87] found id: "b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c"
I0701 23:00:15.215494 235408 cri.go:87] found id: "6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa"
I0701 23:00:15.215502 235408 cri.go:87] found id: "176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee"
I0701 23:00:15.215511 235408 cri.go:87] found id: "8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97"
I0701 23:00:15.215520 235408 cri.go:87] found id: "d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf"
I0701 23:00:15.215525 235408 cri.go:87] found id: ""
I0701 23:00:15.215565 235408 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0701 23:00:15.227285 235408 cri.go:114] JSON = null
W0701 23:00:15.227333 235408 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 8
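[Note] This warning is a consistency check, not a failure: crictl ps found 8 kube-system containers left over from the previous run, but "runc list" in the k8s.io root returned none (JSON = null), so there is nothing to unpause. minikube logs the mismatch and, after confirming kubeadm-flags.env, config.yaml, and the etcd data directory exist, proceeds to the existing-configuration restart path on the next lines.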
I0701 23:00:15.227379 235408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0701 23:00:15.233945 235408 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0701 23:00:15.233963 235408 kubeadm.go:626] restartCluster start
I0701 23:00:15.233999 235408 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0701 23:00:15.240484 235408 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0701 23:00:15.241174 235408 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220701225830-10066" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig
I0701 23:00:15.241716 235408 kubeconfig.go:127] "embed-certs-20220701225830-10066" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig - will repair!
I0701 23:00:15.242489 235408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14483-3503-eda35dcdf4d9484a0cc92ca083f199dac3b6d9e6/kubeconfig: {Name:mk1cabec5fbd11121d3270a69bbde1ee0f95e8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 23:00:15.243993 235408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0701 23:00:15.250252 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:15.250300 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:15.257716 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:15.458122 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:15.458203 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:15.467814 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:15.658250 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:15.658328 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:15.667311 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:15.858625 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:15.858694 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:15.868475 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:16.058762 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:16.058855 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:16.067282 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:16.258588 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:16.258665 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:16.267883 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:16.458192 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:16.458251 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:16.466832 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:16.658110 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:16.658191 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:16.666792 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:16.857872 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:16.857930 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:16.866428 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:17.058705 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:17.058775 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:17.067392 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:17.258667 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:17.258750 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:17.267816 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:17.458072 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:17.458136 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:17.466836 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:17.658111 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:17.658168 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:17.666551 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:17.858833 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:17.858907 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:17.867469 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:18.058731 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:18.058787 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:18.067326 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:18.258823 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:18.258921 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0701 23:00:18.267872 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:18.267889 235408 api_server.go:165] Checking apiserver status ...
I0701 23:00:18.267919 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 23:00:14.579397 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:16.579551 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:17.050422 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:19.050716 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
W0701 23:00:18.276149 235408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0701 23:00:18.276175 235408 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
I0701 23:00:18.276181 235408 kubeadm.go:1092] stopping kube-system containers ...
I0701 23:00:18.276192 235408 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0701 23:00:18.276229 235408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0701 23:00:18.300183 235408 cri.go:87] found id: "4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b"
I0701 23:00:18.300207 235408 cri.go:87] found id: "49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05"
I0701 23:00:18.300219 235408 cri.go:87] found id: "f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853"
I0701 23:00:18.300227 235408 cri.go:87] found id: "b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c"
I0701 23:00:18.300236 235408 cri.go:87] found id: "6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa"
I0701 23:00:18.300246 235408 cri.go:87] found id: "176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee"
I0701 23:00:18.300261 235408 cri.go:87] found id: "8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97"
I0701 23:00:18.300276 235408 cri.go:87] found id: "d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf"
I0701 23:00:18.300290 235408 cri.go:87] found id: ""
I0701 23:00:18.300301 235408 cri.go:232] Stopping containers: [4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b 49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05 f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853 b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c 6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa 176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee 8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97 d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf]
I0701 23:00:18.300363 235408 ssh_runner.go:195] Run: which crictl
I0701 23:00:18.303065 235408 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 4c8b29014183db39e6e2c6554ad01479a632031579c0dac495b9f3ceaaea1d9b 49a774dd0d2f4dd600ad636dedcea2ce9e046364efd3f8e4f66da657abf03b05 f35534e686cebfe1b85d62262e785437085a14d2dccd701df9ab3e7ffe6c9853 b4ff3be1324b2ff9d0e3c3afb1e3b7cba48800827cfb704564ef12f4bcbdaf7c 6d6e8c270009a98182ab6c35e55de13e554cba58cd81593ce561846bba7660aa 176b0e6372260ee0ace52d369f37120ba201373efd26256fbb77b72bcbbfebee 8c2b6a995c0337ca44537ad290318b99291d36f96367f55ac0b76fc3e31a7a97 d8f18479d258f2f42ede3fa10c5307b530b037e94aaab9308182a9572ba396cf
I0701 23:00:18.328625 235408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0701 23:00:18.338646 235408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0701 23:00:18.345594 235408 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Jul 1 22:58 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Jul 1 22:58 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2067 Jul 1 22:59 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Jul 1 22:58 /etc/kubernetes/scheduler.conf
I0701 23:00:18.345647 235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0701 23:00:18.352504 235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0701 23:00:18.359262 235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0701 23:00:18.365558 235408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0701 23:00:18.365599 235408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0701 23:00:18.371746 235408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0701 23:00:18.378024 235408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0701 23:00:18.378069 235408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0701 23:00:18.384176 235408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0701 23:00:18.390610 235408 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0701 23:00:18.390640 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0701 23:00:18.434898 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0701 23:00:18.994369 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0701 23:00:19.180726 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0701 23:00:19.241499 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0701 23:00:19.337746 235408 api_server.go:51] waiting for apiserver process to appear ...
I0701 23:00:19.337809 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 23:00:19.848374 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 23:00:20.348263 235408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0701 23:00:20.427417 235408 api_server.go:71] duration metric: took 1.089672892s to wait for apiserver process to appear ...
I0701 23:00:20.427468 235408 api_server.go:87] waiting for apiserver healthz status ...
I0701 23:00:20.427483 235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 23:00:20.427896 235408 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
I0701 23:00:20.928117 235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 23:00:19.080121 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:21.080325 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:23.579390 220277 node_ready.go:58] node "no-preload-20220701225718-10066" has status "Ready":"False"
I0701 23:00:21.051135 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:23.550711 215661 pod_ready.go:102] pod "coredns-5644d7b6d9-s46dh" in "kube-system" namespace has status "Ready":"False"
I0701 23:00:24.113066 235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0701 23:00:24.113104 235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0701 23:00:24.428460 235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 23:00:24.433753 235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0701 23:00:24.433794 235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0701 23:00:24.928886 235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 23:00:24.934737 235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0701 23:00:24.934757 235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0701 23:00:25.428106 235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 23:00:25.433474 235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0701 23:00:25.433504 235408 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0701 23:00:25.928790 235408 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
I0701 23:00:25.937308 235408 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
ok
I0701 23:00:25.944868 235408 api_server.go:140] control plane version: v1.24.2
I0701 23:00:25.944889 235408 api_server.go:130] duration metric: took 5.517413571s to wait for apiserver health ...
I0701 23:00:25.944898 235408 cni.go:95] Creating CNI manager for ""
I0701 23:00:25.944905 235408 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
I0701 23:00:25.946852 235408 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0701 23:00:27.646987 160696 kubeadm.go:397] StartCluster complete in 7m56.791378401s
I0701 23:00:27.647038 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0701 23:00:27.647092 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0701 23:00:27.670385 160696 cri.go:87] found id: ""
I0701 23:00:27.670408 160696 logs.go:274] 0 containers: []
W0701 23:00:27.670416 160696 logs.go:276] No container was found matching "kube-apiserver"
I0701 23:00:27.670424 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0701 23:00:27.670479 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0701 23:00:27.695513 160696 cri.go:87] found id: ""
I0701 23:00:27.695537 160696 logs.go:274] 0 containers: []
W0701 23:00:27.695546 160696 logs.go:276] No container was found matching "etcd"
I0701 23:00:27.695555 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0701 23:00:27.695610 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0701 23:00:27.718045 160696 cri.go:87] found id: ""
I0701 23:00:27.718072 160696 logs.go:274] 0 containers: []
W0701 23:00:27.718081 160696 logs.go:276] No container was found matching "coredns"
I0701 23:00:27.718088 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0701 23:00:27.718135 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0701 23:00:27.742214 160696 cri.go:87] found id: ""
I0701 23:00:27.742241 160696 logs.go:274] 0 containers: []
W0701 23:00:27.742249 160696 logs.go:276] No container was found matching "kube-scheduler"
I0701 23:00:27.742257 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0701 23:00:27.742312 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0701 23:00:27.764994 160696 cri.go:87] found id: ""
I0701 23:00:27.765033 160696 logs.go:274] 0 containers: []
W0701 23:00:27.765040 160696 logs.go:276] No container was found matching "kube-proxy"
I0701 23:00:27.765047 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0701 23:00:27.765095 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0701 23:00:27.787131 160696 cri.go:87] found id: ""
I0701 23:00:27.787155 160696 logs.go:274] 0 containers: []
W0701 23:00:27.787161 160696 logs.go:276] No container was found matching "kubernetes-dashboard"
I0701 23:00:27.787166 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0701 23:00:27.787206 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0701 23:00:27.809474 160696 cri.go:87] found id: ""
I0701 23:00:27.809497 160696 logs.go:274] 0 containers: []
W0701 23:00:27.809503 160696 logs.go:276] No container was found matching "storage-provisioner"
I0701 23:00:27.809508 160696 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0701 23:00:27.809552 160696 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0701 23:00:27.832826 160696 cri.go:87] found id: ""
I0701 23:00:27.832850 160696 logs.go:274] 0 containers: []
W0701 23:00:27.832857 160696 logs.go:276] No container was found matching "kube-controller-manager"
I0701 23:00:27.832867 160696 logs.go:123] Gathering logs for kubelet ...
I0701 23:00:27.832877 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0701 23:00:27.883957 160696 logs.go:138] Found kubelet problem: Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 23:00:27.939530 160696 logs.go:123] Gathering logs for dmesg ...
I0701 23:00:27.939568 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0701 23:00:27.959413 160696 logs.go:123] Gathering logs for describe nodes ...
I0701 23:00:27.959491 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0701 23:00:28.015733 160696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0701 23:00:28.015759 160696 logs.go:123] Gathering logs for containerd ...
I0701 23:00:28.015772 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0701 23:00:28.063280 160696 logs.go:123] Gathering logs for container status ...
I0701 23:00:28.063306 160696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0701 23:00:28.089939 160696 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1012-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0701 22:58:31.270801 9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0701 23:00:28.089978 160696 out.go:239] *
W0701 23:00:28.090236 160696 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1012-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0701 22:58:31.270801 9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0701 23:00:28.090268 160696 out.go:239] *
W0701 23:00:28.091045 160696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 23:00:28.093078 160696 out.go:177] X Problems detected in kubelet:
I0701 23:00:28.095148 160696 out.go:177] Jul 01 23:00:27 kubernetes-upgrade-20220701225105-10066 kubelet[11544]: Error: failed to parse kubelet flag: unknown flag: --cni-conf-dir
I0701 23:00:28.098679 160696 out.go:177]
W0701 23:00:28.100157 160696 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1012-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0701 22:58:31.270801 9663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1012-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0701 23:00:28.100275 160696 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0701 23:00:28.100315 160696 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0701 23:00:28.102745 160696 out.go:177]
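The failure above is self-consistent: kubelet in Kubernetes v1.24 no longer accepts the --cni-conf-dir flag (it was removed along with dockershim), so a kubelet extra-config retained from the profile's original v1.16.0 start makes every kubelet restart exit immediately, which is why kubeadm's wait-control-plane phase times out. As a sketch of the two recovery paths implied by the suggestion above (the profile name is taken from this run, and treating "minikube delete" as acceptable data loss is an assumption, not part of the log):

# Retry the upgrade with the suggested kubelet cgroup-driver override
out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 \
  --memory=2200 --kubernetes-version=v1.24.2 --driver=docker \
  --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd

# Or discard the stale profile so no v1.16-era kubelet flags are replayed
out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066
out/minikube-linux-amd64 start -p kubernetes-upgrade-20220701225105-10066 \
  --memory=2200 --kubernetes-version=v1.24.2 --driver=docker \
  --container-runtime=containerd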
I0701 23:00:25.948230 235408 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0701 23:00:25.953260 235408 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.2/kubectl ...
I0701 23:00:25.953282 235408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0701 23:00:26.018580 235408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0701 23:00:26.773418 235408 system_pods.go:43] waiting for kube-system pods to appear ...
I0701 23:00:26.779956 235408 system_pods.go:59] 9 kube-system pods found
I0701 23:00:26.779990 235408 system_pods.go:61] "coredns-6d4b75cb6d-vlp9g" [98c71b38-f849-4e40-91c2-ab549594fa28] Running
I0701 23:00:26.780001 235408 system_pods.go:61] "etcd-embed-certs-20220701225830-10066" [5b2adfb2-7c61-4309-8413-cf8f61b7eff2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0701 23:00:26.780011 235408 system_pods.go:61] "kindnet-q2kq6" [849ea186-f716-4f3f-a313-c59e4ab27965] Running
I0701 23:00:26.780019 235408 system_pods.go:61] "kube-apiserver-embed-certs-20220701225830-10066" [6799dad8-6269-4162-974d-76bbd12c1345] Running
I0701 23:00:26.780024 235408 system_pods.go:61] "kube-controller-manager-embed-certs-20220701225830-10066" [1961bf23-e285-4fbb-af22-2051d4b05d07] Running
I0701 23:00:26.780036 235408 system_pods.go:61] "kube-proxy-njxjm" [c3b911f8-f812-4a74-a5ea-7798a0120fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0701 23:00:26.780043 235408 system_pods.go:61] "kube-scheduler-embed-certs-20220701225830-10066" [383a49e0-3ccd-43e6-a46b-3b314a3facc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0701 23:00:26.780054 235408 system_pods.go:61] "metrics-server-5c6f97fb75-nss5q" [c332f30d-8215-4761-a271-dbfdb476a516] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0701 23:00:26.780060 235408 system_pods.go:61] "storage-provisioner" [71d9493b-f59c-4466-acf5-ffa6c1753183] Running
I0701 23:00:26.780069 235408 system_pods.go:74] duration metric: took 6.629693ms to wait for pod list to return data ...
I0701 23:00:26.780077 235408 node_conditions.go:102] verifying NodePressure condition ...
I0701 23:00:26.782515 235408 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0701 23:00:26.782572 235408 node_conditions.go:123] node cpu capacity is 8
I0701 23:00:26.782586 235408 node_conditions.go:105] duration metric: took 2.50322ms to run NodePressure ...
I0701 23:00:26.782611 235408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0701 23:00:26.906104 235408 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0701 23:00:26.909845 235408 kubeadm.go:777] kubelet initialised
I0701 23:00:26.909869 235408 kubeadm.go:778] duration metric: took 3.711665ms waiting for restarted kubelet to initialise ...
I0701 23:00:26.909876 235408 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0701 23:00:26.915175 235408 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-vlp9g" in "kube-system" namespace to be "Ready" ...
I0701 23:00:26.919176 235408 pod_ready.go:92] pod "coredns-6d4b75cb6d-vlp9g" in "kube-system" namespace has status "Ready":"True"
I0701 23:00:26.919194 235408 pod_ready.go:81] duration metric: took 3.994773ms waiting for pod "coredns-6d4b75cb6d-vlp9g" in "kube-system" namespace to be "Ready" ...
I0701 23:00:26.919204 235408 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220701225830-10066" in "kube-system" namespace to be "Ready" ...
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
*
* ==> containerd <==
* -- Logs begin at Fri 2022-07-01 22:52:02 UTC, end at Fri 2022-07-01 23:00:29 UTC. --
Jul 01 22:58:30 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:30.996387992Z" level=error msg="StopPodSandbox for \"\\\"Using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"\\\"Using\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.013449630Z" level=info msg="StopPodSandbox for \"this\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.013516445Z" level=error msg="StopPodSandbox for \"this\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"this\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.031328044Z" level=info msg="StopPodSandbox for \"endpoint\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.031394450Z" level=error msg="StopPodSandbox for \"endpoint\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.052124884Z" level=info msg="StopPodSandbox for \"is\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.052184496Z" level=error msg="StopPodSandbox for \"is\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"is\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.069414849Z" level=info msg="StopPodSandbox for \"deprecated,\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.069468124Z" level=error msg="StopPodSandbox for \"deprecated,\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"deprecated,\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.087742883Z" level=info msg="StopPodSandbox for \"please\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.087810238Z" level=error msg="StopPodSandbox for \"please\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"please\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.105697085Z" level=info msg="StopPodSandbox for \"consider\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.105763365Z" level=error msg="StopPodSandbox for \"consider\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"consider\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.123360405Z" level=info msg="StopPodSandbox for \"using\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.123421603Z" level=error msg="StopPodSandbox for \"using\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"using\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.140416302Z" level=info msg="StopPodSandbox for \"full\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.140469655Z" level=error msg="StopPodSandbox for \"full\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"full\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.157391426Z" level=info msg="StopPodSandbox for \"URL\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.157445545Z" level=error msg="StopPodSandbox for \"URL\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.175061524Z" level=info msg="StopPodSandbox for \"format\\\"\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.175124766Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.191816770Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.191866680Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.208616519Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
Jul 01 22:58:31 kubernetes-upgrade-20220701225105-10066 containerd[492]: time="2022-07-01T22:58:31.208676568Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
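The sandbox "IDs" in the StopPodSandbox calls above ("Using", "this", "endpoint", "is", "deprecated,", ..., "URL=\"unix:///run/containerd/containerd.sock\"") read as the whitespace-split tokens of the deprecation warning about CRI endpoints without a URL scheme, which suggests a caller parsed a warning line as a list of sandbox IDs. A quick way to list the sandboxes containerd actually tracks (a sketch, reusing the runtime endpoint printed in these messages):

sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods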
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +0.007942] FS-Cache: N-cookie d=00000000de7c5649{9p.inode} n=00000000ed85478f
[ +0.008742] FS-Cache: N-key=[8] '84a00f0200000000'
[ +0.440350] FS-Cache: Duplicate cookie detected
[ +0.004678] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006759] FS-Cache: O-cookie d=00000000de7c5649{9p.inode} n=000000000ba03907
[ +0.007365] FS-Cache: O-key=[8] '8ea00f0200000000'
[ +0.004953] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.008025] FS-Cache: N-cookie d=00000000de7c5649{9p.inode} n=00000000dd0fdb1e
[ +0.008650] FS-Cache: N-key=[8] '8ea00f0200000000'
[Jul 1 22:31] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jul 1 22:51] process 'docker/tmp/qemu-check843609603/check' started with executable stack
[Jul 1 22:56] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff da 5a 07 89 70 97 08 06
[ +9.422376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 ec 04 d9 67 12 08 06
[ +0.001554] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 e8 f5 ab 62 77 08 06
[ +4.219906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 34 d0 5a db d2 08 06
[ +0.000387] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff da 5a 07 89 70 97 08 06
[Jul 1 22:57] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 f6 a0 f9 35 79 08 06
[ +0.000417] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 12 ec 04 d9 67 12 08 06
*
* ==> kernel <==
* 23:00:29 up 43 min, 0 users, load average: 1.80, 3.38, 2.53
Linux kubernetes-upgrade-20220701225105-10066 5.15.0-1012-gcp #17~20.04.1-Ubuntu SMP Thu Jun 23 16:10:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kubelet <==
* -- Logs begin at Fri 2022-07-01 22:52:02 UTC, end at Fri 2022-07-01 23:00:29 UTC. --
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --storage-driver-buffer-duration duration Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction (default 1m0s) (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --storage-driver-db string database name (default "cadvisor") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --storage-driver-host string database host:port (default "localhost:8086") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --storage-driver-password string database password (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --storage-driver-secure use secure connection with database (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --storage-driver-table string table name (default "stats") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --storage-driver-user string database username (default "root") (DEPRECATED: This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --streaming-connection-idle-timeout duration Maximum time a streaming connection can be idle before the connection is automatically closed. 0 indicates no timeout. Example: '5m'. Note: All connections to the kubelet server have a maximum duration of 4 hours. (default 4h0m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --sync-frequency duration Max period between synchronizing running containers and config (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --system-cgroups string Optional absolute name of cgroups in which to place all non-kernel processes that are not already inside a cgroup under '/'. Empty for no container. Rolling back the flag requires a reboot. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --system-reserved mapStringString A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more detail. [default=none] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --system-reserved-cgroup string Absolute name of the top level cgroup that is used to manage non-kubernetes components for which compute resources were reserved via '--system-reserved' flag. Ex. '/system-reserved'. [default=''] (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --tls-cert-file string File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --tls-private-key-file string File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --topology-manager-policy string Topology Manager policy to use. Possible values: 'none', 'best-effort', 'restricted', 'single-numa-node'. (default "none") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --topology-manager-scope string Scope to which topology hints applied. Topology Manager collects hints from Hint Providers and applies them to defined scope to ensure the pod admission. Possible values: 'container', 'pod'. (default "container") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: -v, --v Level number for the log level verbosity
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --version version[=true] Print version information and quit
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --vmodule pattern=N,... comma-separated list of pattern=N settings for file-filtered logging (only works for text log format)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --volume-plugin-dir string The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 01 23:00:28 kubernetes-upgrade-20220701225105-10066 kubelet[11714]: --volume-stats-agg-period duration Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to a negative number. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
-- /stdout --
** stderr **
E0701 23:00:29.354323 238679 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220701225105-10066 -n kubernetes-upgrade-20220701225105-10066: exit status 2 (397.214ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-20220701225105-10066" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220701225105-10066" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066: (2.196388978s)
--- FAIL: TestKubernetesUpgrade (566.28s)
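
The failure leaves the profile behind with a stopped apiserver, and the helper steps above show the triage sequence the harness runs before cleanup. A minimal sketch of repeating the same checks by hand, using the profile name and the status/delete invocations taken verbatim from this log (the `logs` command is an assumption here — it is the standard minikube command that produces the kernel/kubelet dump shown earlier, not something this test invoked directly):

# Apiserver state; exits non-zero while the cluster is stopped (exit status 2 above)
out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220701225105-10066

# Full cluster log dump (dmesg, kernel, and kubelet sections, as printed above)
out/minikube-linux-amd64 logs -p kubernetes-upgrade-20220701225105-10066

# Tear down the broken profile, as the helpers_test.go cleanup step does
out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220701225105-10066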