=== RUN TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (24.015486067s)
version_upgrade_test.go:227: (dbg) Run: out/minikube-linux-amd64 stop -p kubernetes-upgrade-578123
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-578123: (1.289835017s)
version_upgrade_test.go:232: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-578123 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-578123 status --format={{.Host}}: exit status 7 (81.038648ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
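The status check above is intentionally non-fatal: after `minikube stop`, `status --format={{.Host}}` reports a stopped host with a non-zero exit code, which the test records as "may be ok" rather than failing. A minimal sketch of the same check outside the harness, using the profile from this run:
    out/minikube-linux-amd64 -p kubernetes-upgrade-578123 status --format='{{.Host}}'
    # prints "Stopped"; echo $? gives 7 in this run, tolerated by the test after a stop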
version_upgrade_test.go:243: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (23.55975104s)
version_upgrade_test.go:248: (dbg) Run: kubectl --context kubernetes-upgrade-578123 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd: exit status 106 (73.881939ms)
-- stdout --
* [kubernetes-upgrade-578123] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21738
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21738-136141/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-136141/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr **
X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
* Suggestion:
1) Recreate the cluster with Kubernetes 1.28.0, by running:
minikube delete -p kubernetes-upgrade-578123
minikube start -p kubernetes-upgrade-578123 --kubernetes-version=v1.28.0
2) Create a second cluster with Kubernetes 1.28.0, by running:
minikube start -p kubernetes-upgrade-5781232 --kubernetes-version=v1.28.0
3) Use the existing cluster at version Kubernetes 1.34.1, by running:
minikube start -p kubernetes-upgrade-578123 --kubernetes-version=v1.34.1
** /stderr **
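The rejection above is the behavior under test: minikube refuses to move an existing v1.34.1 cluster back to v1.28.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of exercising that guard directly, with the same flags the log shows:
    out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 \
      --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
    echo $?   # 106 => K8S_DOWNGRADE_UNSUPPORTED, matching the stderr above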
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 80 (6m25.959686732s)
-- stdout --
* [kubernetes-upgrade-578123] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21738
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21738-136141/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-136141/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on existing profile
* Starting "kubernetes-upgrade-578123" primary control-plane node in "kubernetes-upgrade-578123" cluster
* Pulling base image v0.0.48-1760363564-21724 ...
* Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons:
-- /stdout --
** stderr **
I1016 18:16:54.629898 356761 out.go:360] Setting OutFile to fd 1 ...
I1016 18:16:54.630199 356761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:16:54.630209 356761 out.go:374] Setting ErrFile to fd 2...
I1016 18:16:54.630214 356761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:16:54.630431 356761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-136141/.minikube/bin
I1016 18:16:54.630947 356761 out.go:368] Setting JSON to false
I1016 18:16:54.632033 356761 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3550,"bootTime":1760635065,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1016 18:16:54.632151 356761 start.go:141] virtualization: kvm guest
I1016 18:16:54.633942 356761 out.go:179] * [kubernetes-upgrade-578123] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1016 18:16:54.635139 356761 out.go:179] - MINIKUBE_LOCATION=21738
I1016 18:16:54.635173 356761 notify.go:220] Checking for updates...
I1016 18:16:54.637211 356761 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1016 18:16:54.638400 356761 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21738-136141/kubeconfig
I1016 18:16:54.639455 356761 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-136141/.minikube
I1016 18:16:54.640462 356761 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1016 18:16:54.641520 356761 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1016 18:16:54.643156 356761 config.go:182] Loaded profile config "kubernetes-upgrade-578123": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1016 18:16:54.643823 356761 driver.go:421] Setting default libvirt URI to qemu:///system
I1016 18:16:54.669324 356761 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
I1016 18:16:54.669435 356761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1016 18:16:54.729622 356761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:16:54.719285527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1016 18:16:54.729734 356761 docker.go:318] overlay module found
I1016 18:16:54.733324 356761 out.go:179] * Using the docker driver based on existing profile
I1016 18:16:54.734524 356761 start.go:305] selected driver: docker
I1016 18:16:54.734542 356761 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-578123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-578123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1016 18:16:54.734674 356761 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1016 18:16:54.735397 356761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1016 18:16:54.796329 356761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:16:54.786524492 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1016 18:16:54.796588 356761 cni.go:84] Creating CNI manager for ""
I1016 18:16:54.796640 356761 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1016 18:16:54.796669 356761 start.go:349] cluster config:
{Name:kubernetes-upgrade-578123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-578123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1016 18:16:54.798280 356761 out.go:179] * Starting "kubernetes-upgrade-578123" primary control-plane node in "kubernetes-upgrade-578123" cluster
I1016 18:16:54.799287 356761 cache.go:123] Beginning downloading kic base image for docker with containerd
I1016 18:16:54.800358 356761 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
I1016 18:16:54.801279 356761 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1016 18:16:54.801319 356761 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
I1016 18:16:54.801328 356761 cache.go:58] Caching tarball of preloaded images
I1016 18:16:54.801376 356761 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
I1016 18:16:54.801409 356761 preload.go:233] Found /home/jenkins/minikube-integration/21738-136141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1016 18:16:54.801419 356761 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
I1016 18:16:54.801507 356761 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/config.json ...
I1016 18:16:54.823981 356761 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
I1016 18:16:54.823995 356761 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
I1016 18:16:54.824010 356761 cache.go:232] Successfully downloaded all kic artifacts
I1016 18:16:54.824033 356761 start.go:360] acquireMachinesLock for kubernetes-upgrade-578123: {Name:mk58c10cb7bae6b0d8feee9f4114607d6be8bc5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:16:54.824105 356761 start.go:364] duration metric: took 54.359µs to acquireMachinesLock for "kubernetes-upgrade-578123"
I1016 18:16:54.824128 356761 start.go:96] Skipping create...Using existing machine configuration
I1016 18:16:54.824138 356761 fix.go:54] fixHost starting:
I1016 18:16:54.824348 356761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-578123 --format={{.State.Status}}
I1016 18:16:54.843295 356761 fix.go:112] recreateIfNeeded on kubernetes-upgrade-578123: state=Running err=<nil>
W1016 18:16:54.843324 356761 fix.go:138] unexpected machine state, will restart: <nil>
I1016 18:16:54.844888 356761 out.go:252] * Updating the running docker "kubernetes-upgrade-578123" container ...
I1016 18:16:54.844935 356761 machine.go:93] provisionDockerMachine start ...
I1016 18:16:54.845020 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:54.863124 356761 main.go:141] libmachine: Using SSH client type: native
I1016 18:16:54.863398 356761 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33023 <nil> <nil>}
I1016 18:16:54.863416 356761 main.go:141] libmachine: About to run SSH command:
hostname
I1016 18:16:55.002843 356761 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-578123
I1016 18:16:55.002873 356761 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-578123"
I1016 18:16:55.002965 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:55.022681 356761 main.go:141] libmachine: Using SSH client type: native
I1016 18:16:55.022941 356761 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33023 <nil> <nil>}
I1016 18:16:55.022963 356761 main.go:141] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-578123 && echo "kubernetes-upgrade-578123" | sudo tee /etc/hostname
I1016 18:16:55.175699 356761 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-578123
I1016 18:16:55.175790 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:55.196359 356761 main.go:141] libmachine: Using SSH client type: native
I1016 18:16:55.196585 356761 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33023 <nil> <nil>}
I1016 18:16:55.196610 356761 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-578123' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-578123/g' /etc/hosts;
else
echo '127.0.1.1 kubernetes-upgrade-578123' | sudo tee -a /etc/hosts;
fi
fi
I1016 18:16:55.340564 356761 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1016 18:16:55.340596 356761 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-136141/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-136141/.minikube}
I1016 18:16:55.340622 356761 ubuntu.go:190] setting up certificates
I1016 18:16:55.340635 356761 provision.go:84] configureAuth start
I1016 18:16:55.340692 356761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-578123
I1016 18:16:55.362050 356761 provision.go:143] copyHostCerts
I1016 18:16:55.362167 356761 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-136141/.minikube/cert.pem, removing ...
I1016 18:16:55.362192 356761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-136141/.minikube/cert.pem
I1016 18:16:55.362294 356761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-136141/.minikube/cert.pem (1123 bytes)
I1016 18:16:55.363163 356761 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-136141/.minikube/key.pem, removing ...
I1016 18:16:55.363187 356761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-136141/.minikube/key.pem
I1016 18:16:55.363249 356761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-136141/.minikube/key.pem (1675 bytes)
I1016 18:16:55.363360 356761 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-136141/.minikube/ca.pem, removing ...
I1016 18:16:55.363374 356761 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-136141/.minikube/ca.pem
I1016 18:16:55.363414 356761 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-136141/.minikube/ca.pem (1082 bytes)
I1016 18:16:55.363522 356761 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-136141/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-578123 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-578123 localhost minikube]
I1016 18:16:55.567546 356761 provision.go:177] copyRemoteCerts
I1016 18:16:55.567628 356761 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1016 18:16:55.567692 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:55.587715 356761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa Username:docker}
I1016 18:16:55.689881 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1016 18:16:55.714337 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1016 18:16:55.734328 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1016 18:16:55.753513 356761 provision.go:87] duration metric: took 412.859451ms to configureAuth
I1016 18:16:55.753549 356761 ubuntu.go:206] setting minikube options for container-runtime
I1016 18:16:55.753748 356761 config.go:182] Loaded profile config "kubernetes-upgrade-578123": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1016 18:16:55.753764 356761 machine.go:96] duration metric: took 908.819911ms to provisionDockerMachine
I1016 18:16:55.753775 356761 start.go:293] postStartSetup for "kubernetes-upgrade-578123" (driver="docker")
I1016 18:16:55.753788 356761 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1016 18:16:55.753860 356761 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1016 18:16:55.753925 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:55.773512 356761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa Username:docker}
I1016 18:16:55.876895 356761 ssh_runner.go:195] Run: cat /etc/os-release
I1016 18:16:55.881566 356761 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1016 18:16:55.881602 356761 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1016 18:16:55.881616 356761 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-136141/.minikube/addons for local assets ...
I1016 18:16:55.881675 356761 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-136141/.minikube/files for local assets ...
I1016 18:16:55.881931 356761 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem -> 1398262.pem in /etc/ssl/certs
I1016 18:16:55.882146 356761 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1016 18:16:55.895748 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem --> /etc/ssl/certs/1398262.pem (1708 bytes)
I1016 18:16:55.918509 356761 start.go:296] duration metric: took 164.71746ms for postStartSetup
I1016 18:16:55.918580 356761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1016 18:16:55.918614 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:55.938902 356761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa Username:docker}
I1016 18:16:56.042248 356761 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1016 18:16:56.048263 356761 fix.go:56] duration metric: took 1.22411835s for fixHost
I1016 18:16:56.048294 356761 start.go:83] releasing machines lock for "kubernetes-upgrade-578123", held for 1.224174192s
I1016 18:16:56.048371 356761 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-578123
I1016 18:16:56.067405 356761 ssh_runner.go:195] Run: cat /version.json
I1016 18:16:56.067476 356761 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1016 18:16:56.067495 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:56.067549 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:16:56.090905 356761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa Username:docker}
I1016 18:16:56.090906 356761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa Username:docker}
I1016 18:16:56.275657 356761 ssh_runner.go:195] Run: systemctl --version
I1016 18:16:56.284784 356761 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1016 18:16:56.290316 356761 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1016 18:16:56.290381 356761 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1016 18:16:56.299802 356761 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1016 18:16:56.299824 356761 start.go:495] detecting cgroup driver to use...
I1016 18:16:56.299853 356761 detect.go:190] detected "systemd" cgroup driver on host os
I1016 18:16:56.299888 356761 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1016 18:16:56.314505 356761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1016 18:16:56.327606 356761 docker.go:218] disabling cri-docker service (if available) ...
I1016 18:16:56.327668 356761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1016 18:16:56.344550 356761 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1016 18:16:56.359352 356761 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1016 18:16:56.478329 356761 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1016 18:16:56.585260 356761 docker.go:234] disabling docker service ...
I1016 18:16:56.585321 356761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1016 18:16:56.604027 356761 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1016 18:16:56.620554 356761 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1016 18:16:56.738015 356761 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1016 18:16:56.864022 356761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1016 18:16:56.878626 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1016 18:16:56.894418 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1016 18:16:56.905051 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1016 18:16:56.915645 356761 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1016 18:16:56.915714 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1016 18:16:56.925362 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1016 18:16:56.935750 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1016 18:16:56.947290 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1016 18:16:56.957617 356761 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1016 18:16:56.966484 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1016 18:16:56.975537 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1016 18:16:56.985115 356761 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1016 18:16:56.995958 356761 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1016 18:16:57.003840 356761 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1016 18:16:57.012030 356761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1016 18:16:57.110674 356761 ssh_runner.go:195] Run: sudo systemctl restart containerd
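The sed commands above rewrite /etc/containerd/config.toml in place before containerd is restarted. A quick way to confirm the intended end state (keys taken from the sed patterns in this log; the exact file layout depends on the kicbase image):
    # run inside the node, e.g. via `minikube -p kubernetes-upgrade-578123 ssh`
    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected after the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = true
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true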
I1016 18:16:57.225981 356761 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1016 18:16:57.226085 356761 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1016 18:16:57.230729 356761 start.go:563] Will wait 60s for crictl version
I1016 18:16:57.230811 356761 ssh_runner.go:195] Run: which crictl
I1016 18:16:57.234939 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1016 18:16:57.261445 356761 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.28
RuntimeApiVersion: v1
I1016 18:16:57.261514 356761 ssh_runner.go:195] Run: containerd --version
I1016 18:16:57.292405 356761 ssh_runner.go:195] Run: containerd --version
I1016 18:16:57.322271 356761 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
I1016 18:16:57.323439 356761 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-578123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1016 18:16:57.340343 356761 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1016 18:16:57.344832 356761 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-578123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-578123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1016 18:16:57.344948 356761 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1016 18:16:57.344991 356761 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 18:16:57.371410 356761 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.34.1". assuming images are not preloaded.
I1016 18:16:57.371490 356761 ssh_runner.go:195] Run: which lz4
I1016 18:16:57.375988 356761 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1016 18:16:57.379857 356761 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
I1016 18:16:57.379878 356761 containerd.go:563] duration metric: took 3.925758ms to copy over tarball
I1016 18:16:57.379918 356761 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1016 18:16:59.827238 356761 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.447297025s)
I1016 18:16:59.827313 356761 kubeadm.go:909] preload failed, will try to load cached images: extracting tarball:
** stderr **
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
tar: Exiting with failure status due to previous errors
** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
stdout:
stderr:
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
tar: Exiting with failure status due to previous errors
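The extraction failure above is non-fatal: the preload tarball is unpacked over a /var/lib/containerd tree already populated by the earlier v1.28.0 start, tar exits 2 on the pre-existing zoneinfo entries, and minikube falls back to loading cached images (the crictl/ctr calls that follow). A sketch of the same collision, assuming an already-populated /var tree on the node:
    # the same invocation the log shows; against a live containerd data dir it can
    # fail with "Cannot open: File exists" and exit 2, as captured above
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4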
I1016 18:16:59.827384 356761 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 18:16:59.852980 356761 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-controller-manager:v1.34.1". assuming images are not preloaded.
I1016 18:16:59.853011 356761 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I1016 18:16:59.853121 356761 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:16:59.853148 356761 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
I1016 18:16:59.853154 356761 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
I1016 18:16:59.853172 356761 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
I1016 18:16:59.853128 356761 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
I1016 18:16:59.853157 356761 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I1016 18:16:59.853129 356761 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
I1016 18:16:59.853181 356761 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
I1016 18:16:59.855200 356761 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I1016 18:16:59.855254 356761 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
I1016 18:16:59.855286 356761 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
I1016 18:16:59.855360 356761 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
I1016 18:16:59.855367 356761 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
I1016 18:16:59.855207 356761 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
I1016 18:16:59.856039 356761 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:16:59.856572 356761 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
I1016 18:17:00.035946 356761 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
I1016 18:17:00.036012 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
I1016 18:17:00.065937 356761 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
I1016 18:17:00.065988 356761 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
I1016 18:17:00.066044 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:00.070021 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I1016 18:17:00.070539 356761 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
I1016 18:17:00.070599 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
I1016 18:17:00.087568 356761 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
I1016 18:17:00.087650 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
I1016 18:17:00.092824 356761 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
I1016 18:17:00.092902 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I1016 18:17:00.093810 356761 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
I1016 18:17:00.093874 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
I1016 18:17:00.100669 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I1016 18:17:00.100675 356761 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
I1016 18:17:00.100783 356761 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
I1016 18:17:00.100817 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:00.122845 356761 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
I1016 18:17:00.122903 356761 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
I1016 18:17:00.122949 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:00.123221 356761 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
I1016 18:17:00.123264 356761 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
I1016 18:17:00.123314 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:00.125389 356761 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
I1016 18:17:00.125442 356761 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
I1016 18:17:00.125504 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:00.133104 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
I1016 18:17:00.133165 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1016 18:17:00.133111 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I1016 18:17:00.133203 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I1016 18:17:00.133269 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
I1016 18:17:00.140739 356761 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
I1016 18:17:00.140812 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
I1016 18:17:00.142694 356761 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
I1016 18:17:00.142764 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
I1016 18:17:01.129099 356761 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
I1016 18:17:01.129186 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:17:01.236397 356761 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1: (1.103248642s)
I1016 18:17:01.236464 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
I1016 18:17:01.236484 356761 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0: (1.103249578s)
I1016 18:17:01.236524 356761 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1: (1.103313679s)
I1016 18:17:01.236561 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
I1016 18:17:01.236563 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I1016 18:17:01.466573 356761 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1: (1.33327232s)
I1016 18:17:01.466623 356761 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.333423829s)
I1016 18:17:01.466649 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
I1016 18:17:01.466651 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
I1016 18:17:01.466716 356761 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1: (1.325887316s)
I1016 18:17:01.466775 356761 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1: (1.323997734s)
I1016 18:17:01.466779 356761 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
I1016 18:17:01.466812 356761 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
I1016 18:17:01.466821 356761 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
I1016 18:17:01.466840 356761 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
I1016 18:17:01.466866 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:01.466875 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:01.466887 356761 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1016 18:17:01.466916 356761 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:17:01.466955 356761 ssh_runner.go:195] Run: which crictl
I1016 18:17:01.466968 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I1016 18:17:01.496413 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
I1016 18:17:01.496534 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
I1016 18:17:01.496619 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
I1016 18:17:01.524795 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
I1016 18:17:01.524815 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
I1016 18:17:01.524798 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
I1016 18:17:01.524835 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:17:01.525089 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
I1016 18:17:01.554440 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
I1016 18:17:01.554515 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
I1016 18:17:01.554440 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:17:01.584815 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
I1016 18:17:01.584925 356761 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:17:01.584961 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
I1016 18:17:01.611528 356761 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1016 18:17:01.611641 356761 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1016 18:17:01.616175 356761 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1016 18:17:01.616199 356761 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1016 18:17:01.616248 356761 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1016 18:17:03.174356 356761 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.558073856s)
I1016 18:17:03.174401 356761 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1016 18:17:03.174453 356761 cache_images.go:93] duration metric: took 3.321425379s to LoadCachedImages
W1016 18:17:03.174534 356761 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1: no such file or directory
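The interleaved crictl/ctr calls above are minikube's cache-first image load: for each required image, check containerd's k8s.io namespace, remove any stale tag, and import the cached tarball. Here only storage-provisioner needed a transfer, and the kube-apiserver tarball was missing from the cache entirely (hence the warning). A hypothetical Go sketch of that per-image decision, not minikube's actual code; the tag and tarball path are taken from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the flow visible in the log: if a tag is absent from
// containerd's k8s.io namespace, drop any stale reference via crictl and
// import the cached tarball via ctr.
func ensureImage(tag, tarball string) error {
	// `ctr -n=k8s.io images ls name==<tag>` lists the tag only if it exists.
	out, _ := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+tag).CombinedOutput()
	if strings.Contains(string(out), tag) {
		return nil // present; minikube additionally compares the image digest
	}
	_ = exec.Command("sudo", "crictl", "rmi", tag).Run() // stale tag may not exist; ignore
	if out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("import %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	fmt.Println(ensureImage("gcr.io/k8s-minikube/storage-provisioner:v5",
		"/var/lib/minikube/images/storage-provisioner_v5"))
}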
I1016 18:17:03.174551 356761 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
I1016 18:17:03.174646 356761 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-578123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-578123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1016 18:17:03.174696 356761 ssh_runner.go:195] Run: sudo crictl info
I1016 18:17:03.211529 356761 cni.go:84] Creating CNI manager for ""
I1016 18:17:03.211561 356761 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1016 18:17:03.211587 356761 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1016 18:17:03.211619 356761 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-578123 NodeName:kubernetes-upgrade-578123 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1016 18:17:03.211776 356761 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "kubernetes-upgrade-578123"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
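The generated config above targets kubeadm's v1beta4 API and is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sanity check outside the test, recent kubeadm releases ship a `kubeadm config validate` subcommand that can vet such a file before it is used; a hypothetical sketch (the binary path and file name are the ones this log uses, and availability of the subcommand in the v1.34.1 binary is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: `kubeadm config validate` is present in this kubeadm build.
	out, err := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm",
		"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err != nil {
		fmt.Printf("config rejected: %v\n%s", err, out)
		return
	}
	fmt.Println("kubeadm config OK")
}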
I1016 18:17:03.211851 356761 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1016 18:17:03.221608 356761 binaries.go:44] Found k8s binaries, skipping transfer
I1016 18:17:03.221678 356761 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1016 18:17:03.231005 356761 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I1016 18:17:03.245237 356761 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1016 18:17:03.260969 356761 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1016 18:17:03.276346 356761 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1016 18:17:03.281306 356761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1016 18:17:03.392039 356761 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1016 18:17:03.407730 356761 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123 for IP: 192.168.85.2
I1016 18:17:03.407751 356761 certs.go:195] generating shared ca certs ...
I1016 18:17:03.407772 356761 certs.go:227] acquiring lock for ca certs: {Name:mk7cc3421b912e6a4589d13a0cd6d944b4879005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1016 18:17:03.407904 356761 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-136141/.minikube/ca.key
I1016 18:17:03.407941 356761 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-136141/.minikube/proxy-client-ca.key
I1016 18:17:03.407951 356761 certs.go:257] generating profile certs ...
I1016 18:17:03.408031 356761 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/client.key
I1016 18:17:03.408111 356761 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/apiserver.key.f0814941
I1016 18:17:03.408162 356761 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/proxy-client.key
I1016 18:17:03.408277 356761 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/139826.pem (1338 bytes)
W1016 18:17:03.408309 356761 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-136141/.minikube/certs/139826_empty.pem, impossibly tiny 0 bytes
I1016 18:17:03.408317 356761 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca-key.pem (1675 bytes)
I1016 18:17:03.408337 356761 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem (1082 bytes)
I1016 18:17:03.408359 356761 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/cert.pem (1123 bytes)
I1016 18:17:03.408379 356761 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/key.pem (1675 bytes)
I1016 18:17:03.408415 356761 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem (1708 bytes)
I1016 18:17:03.409043 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1016 18:17:03.429114 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1016 18:17:03.449052 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1016 18:17:03.470179 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1016 18:17:03.491838 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1016 18:17:03.511588 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1016 18:17:03.533111 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1016 18:17:03.554693 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1016 18:17:03.577299 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1016 18:17:03.597914 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/certs/139826.pem --> /usr/share/ca-certificates/139826.pem (1338 bytes)
I1016 18:17:03.617533 356761 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem --> /usr/share/ca-certificates/1398262.pem (1708 bytes)
I1016 18:17:03.638035 356761 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1016 18:17:03.655142 356761 ssh_runner.go:195] Run: openssl version
I1016 18:17:03.662123 356761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1016 18:17:03.673638 356761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1016 18:17:03.680550 356761 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:45 /usr/share/ca-certificates/minikubeCA.pem
I1016 18:17:03.680618 356761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1016 18:17:03.720212 356761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1016 18:17:03.731036 356761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139826.pem && ln -fs /usr/share/ca-certificates/139826.pem /etc/ssl/certs/139826.pem"
I1016 18:17:03.742696 356761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139826.pem
I1016 18:17:03.748336 356761 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:51 /usr/share/ca-certificates/139826.pem
I1016 18:17:03.748407 356761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139826.pem
I1016 18:17:03.794672 356761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139826.pem /etc/ssl/certs/51391683.0"
I1016 18:17:03.807184 356761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1398262.pem && ln -fs /usr/share/ca-certificates/1398262.pem /etc/ssl/certs/1398262.pem"
I1016 18:17:03.820296 356761 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1398262.pem
I1016 18:17:03.826181 356761 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:51 /usr/share/ca-certificates/1398262.pem
I1016 18:17:03.826252 356761 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1398262.pem
I1016 18:17:03.871275 356761 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1398262.pem /etc/ssl/certs/3ec20f2e.0"
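The b5213941.0, 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash links: `openssl x509 -hash` prints a short hash of the certificate subject, and OpenSSL-based clients look up CAs in /etc/ssl/certs by <hash>.0. A hypothetical Go sketch of the same install step (cert path taken from the log; needs root to write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the OpenSSL subject hash of a PEM certificate and links
// /etc/ssl/certs/<hash>.0 to it, as the shell commands in the log do.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace a stale link if present
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
}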
I1016 18:17:03.882577 356761 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1016 18:17:03.887625 356761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1016 18:17:03.940572 356761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1016 18:17:03.994318 356761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1016 18:17:04.039632 356761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1016 18:17:04.078785 356761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1016 18:17:04.116748 356761 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
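Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The same check in pure Go with crypto/x509, as a minimal sketch (cert path from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, the crypto/x509 equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
}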
I1016 18:17:04.156698 356761 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-578123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-578123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1016 18:17:04.156794 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1016 18:17:04.156851 356761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1016 18:17:04.190135 356761 cri.go:89] found id: ""
I1016 18:17:04.190208 356761 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1016 18:17:04.199002 356761 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I1016 18:17:04.199025 356761 kubeadm.go:597] restartPrimaryControlPlane start ...
I1016 18:17:04.199143 356761 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1016 18:17:04.207437 356761 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1016 18:17:04.208195 356761 kubeconfig.go:125] found "kubernetes-upgrade-578123" server: "https://192.168.85.2:8443"
I1016 18:17:04.209209 356761 kapi.go:59] client config for kubernetes-upgrade-578123: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/client.key", CAFile:"/home/jenkins/minikube-integration/21738-136141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1016 18:17:04.209660 356761 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1016 18:17:04.209674 356761 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1016 18:17:04.209678 356761 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1016 18:17:04.209684 356761 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1016 18:17:04.209697 356761 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1016 18:17:04.210100 356761 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1016 18:17:04.218879 356761 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
I1016 18:17:04.218923 356761 kubeadm.go:601] duration metric: took 19.890097ms to restartPrimaryControlPlane
I1016 18:17:04.218938 356761 kubeadm.go:402] duration metric: took 62.252572ms to StartCluster
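The restart path above decides whether the control plane needs reconfiguring by diffing the freshly rendered kubeadm.yaml.new against the copy already on the node; since they matched, restartPrimaryControlPlane finished in about 20ms without touching the static pods. A minimal illustration of that decision, with bytes.Equal standing in for the `sudo diff -u` the log shows (paths from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Read errors are ignored here for brevity; a missing old file would
	// simply force reconfiguration in a real implementation.
	oldCfg, _ := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	newCfg, _ := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if bytes.Equal(oldCfg, newCfg) {
		fmt.Println("no reconfiguration required")
	} else {
		fmt.Println("config drift detected; control plane must be reconfigured")
	}
}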
I1016 18:17:04.218977 356761 settings.go:142] acquiring lock: {Name:mk69e9fda206cb3246d193be5125ea7b81edb7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1016 18:17:04.219051 356761 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21738-136141/kubeconfig
I1016 18:17:04.220367 356761 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-136141/kubeconfig: {Name:mkcd7be31e9131e009fd8c01dbeba0d9b0a559bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1016 18:17:04.220637 356761 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1016 18:17:04.220748 356761 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1016 18:17:04.220851 356761 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-578123"
I1016 18:17:04.220864 356761 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-578123"
I1016 18:17:04.220897 356761 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-578123"
I1016 18:17:04.220872 356761 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-578123"
W1016 18:17:04.220927 356761 addons.go:247] addon storage-provisioner should already be in state true
I1016 18:17:04.220959 356761 host.go:66] Checking if "kubernetes-upgrade-578123" exists ...
I1016 18:17:04.220899 356761 config.go:182] Loaded profile config "kubernetes-upgrade-578123": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1016 18:17:04.221333 356761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-578123 --format={{.State.Status}}
I1016 18:17:04.221528 356761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-578123 --format={{.State.Status}}
I1016 18:17:04.223545 356761 out.go:179] * Verifying Kubernetes components...
I1016 18:17:04.224831 356761 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1016 18:17:04.245035 356761 kapi.go:59] client config for kubernetes-upgrade-578123: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-136141/.minikube/profiles/kubernetes-upgrade-578123/client.key", CAFile:"/home/jenkins/minikube-integration/21738-136141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1016 18:17:04.245410 356761 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-578123"
W1016 18:17:04.245431 356761 addons.go:247] addon default-storageclass should already be in state true
I1016 18:17:04.245457 356761 host.go:66] Checking if "kubernetes-upgrade-578123" exists ...
I1016 18:17:04.245775 356761 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-578123 --format={{.State.Status}}
I1016 18:17:04.246788 356761 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:17:04.248143 356761 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1016 18:17:04.248159 356761 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1016 18:17:04.248209 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:17:04.268995 356761 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1016 18:17:04.269025 356761 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1016 18:17:04.269118 356761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578123
I1016 18:17:04.269713 356761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa Username:docker}
I1016 18:17:04.294012 356761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa Username:docker}
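The sshutil lines above open SSH sessions to the container's forwarded port 33023 as user docker, authenticated with the profile's id_rsa key; all subsequent Run: lines execute over such sessions. A rough stand-alone equivalent in Go, assuming golang.org/x/crypto/ssh (host-key checking is skipped because the kicbase container's host key is ephemeral; the command run is an arbitrary example):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port are the ones this log reports.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21738-136141/.minikube/machines/kubernetes-upgrade-578123/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Test-only: the container's host key changes on every recreate.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33023", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s", out)
}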
I1016 18:17:04.341041 356761 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1016 18:17:04.355209 356761 api_server.go:52] waiting for apiserver process to appear ...
I1016 18:17:04.355281 356761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1016 18:17:04.367080 356761 api_server.go:72] duration metric: took 146.373781ms to wait for apiserver process to appear ...
I1016 18:17:04.367112 356761 api_server.go:88] waiting for apiserver healthz status ...
I1016 18:17:04.367134 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:04.382618 356761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1016 18:17:04.403364 356761 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1016 18:17:06.373148 356761 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1016 18:17:06.373202 356761 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
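api_server.go polls /healthz every two seconds; the 500s above come from the single failing [-]etcd check, meaning the apiserver process is up but cannot reach a healthy etcd yet. A self-contained sketch of such a wait loop (TLS verification is skipped here for brevity; minikube instead authenticates with the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver's /healthz until it returns 200 or the
// deadline passes, printing each failing body as the log above does.
func waitHealthz(url string, timeout time.Duration) error {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := c.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", 6*time.Minute))
}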
[... identical healthz responses elided: polls at 18:17:08, 18:17:10, 18:17:12, 18:17:14, 18:17:16, 18:17:18 and 18:17:20 each returned the same 500 body ("[-]etcd failed: reason withheld"; all other checks ok) ...]
I1016 18:17:20.422877 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:22.428589 356761 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1016 18:17:22.428632 356761 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1016 18:17:22.428654 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:27.430345 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:17:27.430388 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:32.432295 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:17:32.432347 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:37.434508 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:17:37.434547 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:42.437282 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:17:42.437320 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:47.439425 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:17:47.439478 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:52.441320 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:17:52.441364 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:17:57.443352 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:17:57.443403 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:18:02.445331 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:18:02.445374 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:18:07.447261 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
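
The loop above is minikube's API-server wait (api_server.go:253/269/279): each probe GETs /healthz with a short per-request timeout, prints the verbose [+]/[-] check list when the server answers 500, and logs "stopped: ... context deadline exceeded" when the request never completes at all. A minimal Go sketch of that probe loop, assuming an insecure TLS client for brevity (the real checker authenticates with the cluster's certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, wait time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request cap, visible as the ~5s retry spacing above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: skip cert checks
		},
	}
	deadline := time.Now().Add(wait)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %v\n", err) // the request itself timed out or was refused
			time.Sleep(time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body) // the [+]/[-] check list
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz never reported healthy")
}

func main() {
	if err := waitHealthy("https://192.168.85.2:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
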
I1016 18:18:07.447329 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1016 18:18:07.447389 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1016 18:22:04.648621 356761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5m0.265957337s)
I1016 18:22:04.648732 356761 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5m0.245331345s)
I1016 18:22:04.648785 356761 ssh_runner.go:235] Completed: sudo crictl ps -a --quiet --name=kube-apiserver: (3m57.201363169s)
W1016 18:22:04.648787 356761 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
Name: "standard", Namespace: ""
from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
I1016 18:22:04.648828 356761 cri.go:89] found id: "6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75"
W1016 18:22:04.648692 356761 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "storage-provisioner", Namespace: ""
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
I1016 18:22:04.648840 356761 cri.go:89] found id: ""
I1016 18:22:04.648854 356761 logs.go:282] 1 containers: [6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75]
W1016 18:22:04.648939 356761 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
Name: "standard", Namespace: ""
from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
]
W1016 18:22:04.649033 356761 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "storage-provisioner", Namespace: ""
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
]
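
Every "Error from server (Timeout)" above surfaces during kubectl apply's read phase: apply first retrieves the live object to compute its three-way merge (hence "error when retrieving current configuration of"), so with the apiserver unable to answer reads the addon manifests never get as far as a write, and addons.go:461 queues a retry of the same command. A minimal sketch of that retry shape, assuming a kubectl on the local PATH (minikube actually invokes the pinned binary over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry mirrors the shape of the "apply failed, will retry" log line.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %w\n%s", err, out)
		time.Sleep(5 * time.Second) // back off before the next attempt
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
		fmt.Println(err)
	}
}
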
I1016 18:22:04.649642 356761 ssh_runner.go:195] Run: which crictl
I1016 18:22:04.651092 356761 out.go:179] * Enabled addons:
I1016 18:22:04.652149 356761 addons.go:514] duration metric: took 5m0.431399466s for enable addons: enabled=[]
I1016 18:22:04.656214 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1016 18:22:04.656283 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1016 18:22:04.709840 356761 cri.go:89] found id: "b8f0d1ecab4abecf85aafc218ec23ecab7d4e25e0b32eea84aa44415f6e25038"
I1016 18:22:04.709870 356761 cri.go:89] found id: ""
I1016 18:22:04.709880 356761 logs.go:282] 1 containers: [b8f0d1ecab4abecf85aafc218ec23ecab7d4e25e0b32eea84aa44415f6e25038]
I1016 18:22:04.709940 356761 ssh_runner.go:195] Run: which crictl
I1016 18:22:04.721861 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1016 18:22:04.721940 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1016 18:22:04.773684 356761 cri.go:89] found id: ""
I1016 18:22:04.773732 356761 logs.go:282] 0 containers: []
W1016 18:22:04.773743 356761 logs.go:284] No container was found matching "coredns"
I1016 18:22:04.773751 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1016 18:22:04.773817 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1016 18:22:04.817898 356761 cri.go:89] found id: "97444e7e2cceff8652c2a875cf9a9bd355842660ee69fa42f94c3dff20935639"
I1016 18:22:04.817935 356761 cri.go:89] found id: ""
I1016 18:22:04.817945 356761 logs.go:282] 1 containers: [97444e7e2cceff8652c2a875cf9a9bd355842660ee69fa42f94c3dff20935639]
I1016 18:22:04.818157 356761 ssh_runner.go:195] Run: which crictl
I1016 18:22:04.823692 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1016 18:22:04.823763 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1016 18:22:04.866095 356761 cri.go:89] found id: ""
I1016 18:22:04.866127 356761 logs.go:282] 0 containers: []
W1016 18:22:04.866139 356761 logs.go:284] No container was found matching "kube-proxy"
I1016 18:22:04.866148 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1016 18:22:04.866212 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1016 18:22:04.922998 356761 cri.go:89] found id: "2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046"
I1016 18:22:04.923268 356761 cri.go:89] found id: ""
I1016 18:22:04.923326 356761 logs.go:282] 1 containers: [2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046]
I1016 18:22:04.923444 356761 ssh_runner.go:195] Run: which crictl
I1016 18:22:04.930755 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1016 18:22:04.930828 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1016 18:22:04.972427 356761 cri.go:89] found id: ""
I1016 18:22:04.972452 356761 logs.go:282] 0 containers: []
W1016 18:22:04.972574 356761 logs.go:284] No container was found matching "kindnet"
I1016 18:22:04.972583 356761 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1016 18:22:04.972682 356761 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1016 18:22:05.037206 356761 cri.go:89] found id: ""
I1016 18:22:05.037315 356761 logs.go:282] 0 containers: []
W1016 18:22:05.037338 356761 logs.go:284] No container was found matching "storage-provisioner"
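
The enumeration above (cri.go:54/89, logs.go:282/284) discovers each control-plane container by shelling out to crictl with exactly the flags shown, and treats an empty ID list as "No container was found matching ...". A minimal sketch of that discovery step, assuming crictl is installed and sudo is non-interactive:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers lists all CRI containers (running or not) whose name matches.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // --quiet prints one container ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := findContainers(c)
		fmt.Printf("%s: %d container(s): %v (err=%v)\n", c, len(ids), ids, err)
	}
}
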
I1016 18:22:05.037400 356761 logs.go:123] Gathering logs for kubelet ...
I1016 18:22:05.037433 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1016 18:22:05.109367 356761 logs.go:138] Found kubelet problem: Oct 16 18:16:51 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:16:51.284569 1162 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-578123\" is forbidden: User \"system:node:kubernetes-upgrade-578123\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-578123' and this object" podUID="a2102c07287d0f6b7388f60c0ed4e090" pod="kube-system/etcd-kubernetes-upgrade-578123"
I1016 18:22:05.176133 356761 logs.go:123] Gathering logs for dmesg ...
I1016 18:22:05.176175 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1016 18:22:05.195218 356761 logs.go:123] Gathering logs for describe nodes ...
I1016 18:22:05.195251 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1016 18:23:05.285489 356761 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.090207428s)
W1016 18:23:05.285559 356761 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
output:
** stderr **
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
** /stderr **
I1016 18:23:05.285576 356761 logs.go:123] Gathering logs for kube-apiserver [6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75] ...
I1016 18:23:05.285589 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75"
W1016 18:23:05.314338 356761 logs.go:130] failed kube-apiserver [6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75": Process exited with status 1
stdout:
stderr:
E1016 18:23:05.311090 3587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75\": not found" containerID="6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75"
time="2025-10-16T18:23:05Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75\": not found"
output:
** stderr **
E1016 18:23:05.311090 3587 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75\": not found" containerID="6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75"
time="2025-10-16T18:23:05Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75\": not found"
** /stderr **
I1016 18:23:05.314364 356761 logs.go:123] Gathering logs for kube-scheduler [97444e7e2cceff8652c2a875cf9a9bd355842660ee69fa42f94c3dff20935639] ...
I1016 18:23:05.314388 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 97444e7e2cceff8652c2a875cf9a9bd355842660ee69fa42f94c3dff20935639"
I1016 18:23:05.344624 356761 logs.go:123] Gathering logs for containerd ...
I1016 18:23:05.344663 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1016 18:23:05.423309 356761 logs.go:123] Gathering logs for container status ...
I1016 18:23:05.423352 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1016 18:23:05.455973 356761 logs.go:123] Gathering logs for etcd [b8f0d1ecab4abecf85aafc218ec23ecab7d4e25e0b32eea84aa44415f6e25038] ...
I1016 18:23:05.456019 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b8f0d1ecab4abecf85aafc218ec23ecab7d4e25e0b32eea84aa44415f6e25038"
I1016 18:23:05.493227 356761 logs.go:123] Gathering logs for kube-controller-manager [2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046] ...
I1016 18:23:05.493265 356761 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046"
I1016 18:23:05.523705 356761 out.go:374] Setting ErrFile to fd 2...
I1016 18:23:05.523731 356761 out.go:408] TERM=,COLORTERM=, which probably does not support color
W1016 18:23:05.523800 356761 out.go:285] X Problems detected in kubelet:
W1016 18:23:05.523815 356761 out.go:285] Oct 16 18:16:51 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:16:51.284569 1162 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-578123\" is forbidden: User \"system:node:kubernetes-upgrade-578123\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-578123' and this object" podUID="a2102c07287d0f6b7388f60c0ed4e090" pod="kube-system/etcd-kubernetes-upgrade-578123"
I1016 18:23:05.523828 356761 out.go:374] Setting ErrFile to fd 2...
I1016 18:23:05.523841 356761 out.go:408] TERM=,COLORTERM=, which probably does not support color
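
The "Problems detected in kubelet" block repeats the line that logs.go:138 flagged earlier while scanning the journal fetched with "journalctl -u kubelet -n 400". A minimal sketch of that pattern scan; the pattern list here is hypothetical, standing in for whatever set minikube actually matches against:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// kubeletProblems fetches the recent kubelet journal and keeps lines that
// match a problem pattern. The pattern list below is hypothetical.
func kubeletProblems() ([]string, error) {
	out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		return nil, err
	}
	patterns := []string{"Failed to get status for pod", "forbidden"}
	var found []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				found = append(found, line)
				break
			}
		}
	}
	return found, sc.Err()
}

func main() {
	probs, err := kubeletProblems()
	fmt.Printf("found %d problem line(s), err=%v\n", len(probs), err)
}
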
I1016 18:23:15.526206 356761 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1016 18:23:20.527276 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:23:20.529593 356761 out.go:203]
W1016 18:23:20.530695 356761 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
W1016 18:23:20.530717 356761 out.go:285] *
W1016 18:23:20.533267 356761 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1016 18:23:20.534187 356761 out.go:203]
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-578123 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-10-16 18:23:20.586770792 +0000 UTC m=+2344.355189229
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect kubernetes-upgrade-578123
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-578123:
-- stdout --
[
{
"Id": "49c1831e979e790efefee146c70ea9c2bd502fccd11fc8f9f539be892516af04",
"Created": "2025-10-16T18:16:12.154201211Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 350695,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-10-16T18:16:31.150356008Z",
"FinishedAt": "2025-10-16T18:16:30.013521706Z"
},
"Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
"ResolvConfPath": "/var/lib/docker/containers/49c1831e979e790efefee146c70ea9c2bd502fccd11fc8f9f539be892516af04/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/49c1831e979e790efefee146c70ea9c2bd502fccd11fc8f9f539be892516af04/hostname",
"HostsPath": "/var/lib/docker/containers/49c1831e979e790efefee146c70ea9c2bd502fccd11fc8f9f539be892516af04/hosts",
"LogPath": "/var/lib/docker/containers/49c1831e979e790efefee146c70ea9c2bd502fccd11fc8f9f539be892516af04/49c1831e979e790efefee146c70ea9c2bd502fccd11fc8f9f539be892516af04-json.log",
"Name": "/kubernetes-upgrade-578123",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"kubernetes-upgrade-578123:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "kubernetes-upgrade-578123",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "49c1831e979e790efefee146c70ea9c2bd502fccd11fc8f9f539be892516af04",
"LowerDir": "/var/lib/docker/overlay2/ce2aa1cab3a04bc58b116ca87e2b325490847d5f1f884e530197304a8bcdd7de-init/diff:/var/lib/docker/overlay2/726328294f22ee0557ec9aa7e97ebbdd864df8d3e80d1b05f69bb2be05b0536c/diff",
"MergedDir": "/var/lib/docker/overlay2/ce2aa1cab3a04bc58b116ca87e2b325490847d5f1f884e530197304a8bcdd7de/merged",
"UpperDir": "/var/lib/docker/overlay2/ce2aa1cab3a04bc58b116ca87e2b325490847d5f1f884e530197304a8bcdd7de/diff",
"WorkDir": "/var/lib/docker/overlay2/ce2aa1cab3a04bc58b116ca87e2b325490847d5f1f884e530197304a8bcdd7de/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "kubernetes-upgrade-578123",
"Source": "/var/lib/docker/volumes/kubernetes-upgrade-578123/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "kubernetes-upgrade-578123",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "kubernetes-upgrade-578123",
"name.minikube.sigs.k8s.io": "kubernetes-upgrade-578123",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "8d3aca8ab161b884f70036a05bdda355dcc3a27205bcd1e35274f5f4d5aaa143",
"SandboxKey": "/var/run/docker/netns/8d3aca8ab161",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33023"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33024"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33027"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33025"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33026"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"kubernetes-upgrade-578123": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "26:96:64:63:f8:86",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "9c991a797aa01491aa0955ccae120ceaa1c286645ad7a94e193b4e507f6c052f",
"EndpointID": "4ef607729596b12f16a83967ef883b04e7bf19cd948c05d510789995f0d2e27e",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"kubernetes-upgrade-578123",
"49c1831e979e"
]
}
}
}
}
]
-- /stdout --
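
The inspect output above shows how minikube reaches the node: every container port is published on a loopback host port, e.g. 8443/tcp (the apiserver) on 127.0.0.1:33026. A Go template handed to docker inspect can pull a single field like that host port; a small sketch, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index .NetworkSettings.Ports["8443/tcp"][0].HostPort out of the inspect JSON.
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, "kubernetes-upgrade-578123").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 33026 in the output above
}
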
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-578123 -n kubernetes-upgrade-578123
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-578123 -n kubernetes-upgrade-578123: exit status 2 (15.822877759s)
-- stdout --
Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-578123 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-578123 logs -n 25: (1m0.924300292s)
helpers_test.go:260: TestKubernetesUpgrade logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ -p calico-343710 sudo cat /var/lib/kubelet/config.yaml │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo systemctl status docker --all --full --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ │
│ ssh │ -p calico-343710 sudo systemctl cat docker --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo cat /etc/docker/daemon.json │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ │
│ ssh │ -p calico-343710 sudo docker system info │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ │
│ ssh │ -p calico-343710 sudo systemctl status cri-docker --all --full --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ │
│ ssh │ -p calico-343710 sudo systemctl cat cri-docker --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ │
│ ssh │ -p calico-343710 sudo cat /usr/lib/systemd/system/cri-docker.service │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo cri-dockerd --version │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo systemctl status containerd --all --full --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo systemctl cat containerd --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ start │ -p old-k8s-version-570485 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-570485 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ │
│ ssh │ -p calico-343710 sudo cat /etc/containerd/config.toml │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo containerd config dump │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo systemctl status crio --all --full --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ │
│ ssh │ -p calico-343710 sudo systemctl cat crio --no-pager │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ ssh │ -p calico-343710 sudo crio config │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ delete │ -p calico-343710 │ calico-343710 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
│ start │ -p embed-certs-666387 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1 │ embed-certs-666387 │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:23 UTC │
│ addons │ enable metrics-server -p no-preload-810937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ no-preload-810937 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
│ stop │ -p no-preload-810937 --alsologtostderr -v=3 │ no-preload-810937 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
│ addons │ enable dashboard -p no-preload-810937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4 │ no-preload-810937 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
│ start │ -p no-preload-810937 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1 │ no-preload-810937 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/10/16 18:23:20
Running on machine: ubuntu-20-agent-10
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1016 18:23:20.020484 455489 out.go:360] Setting OutFile to fd 1 ...
I1016 18:23:20.020786 455489 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:23:20.020797 455489 out.go:374] Setting ErrFile to fd 2...
I1016 18:23:20.020802 455489 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 18:23:20.020986 455489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-136141/.minikube/bin
I1016 18:23:20.021490 455489 out.go:368] Setting JSON to false
I1016 18:23:20.022857 455489 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3935,"bootTime":1760635065,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1016 18:23:20.022953 455489 start.go:141] virtualization: kvm guest
I1016 18:23:20.024919 455489 out.go:179] * [no-preload-810937] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1016 18:23:20.026207 455489 out.go:179] - MINIKUBE_LOCATION=21738
I1016 18:23:20.026243 455489 notify.go:220] Checking for updates...
I1016 18:23:20.028210 455489 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1016 18:23:20.029298 455489 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21738-136141/kubeconfig
I1016 18:23:20.030312 455489 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-136141/.minikube
I1016 18:23:20.031256 455489 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1016 18:23:20.032286 455489 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1016 18:23:20.033707 455489 config.go:182] Loaded profile config "no-preload-810937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1016 18:23:20.034243 455489 driver.go:421] Setting default libvirt URI to qemu:///system
I1016 18:23:20.059326 455489 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
I1016 18:23:20.059415 455489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1016 18:23:20.123589 455489 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:23:20.112043838 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1016 18:23:20.123707 455489 docker.go:318] overlay module found
I1016 18:23:20.126002 455489 out.go:179] * Using the docker driver based on existing profile
I1016 18:23:20.127073 455489 start.go:305] selected driver: docker
I1016 18:23:20.127088 455489 start.go:925] validating driver "docker" against &{Name:no-preload-810937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-810937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1016 18:23:20.127177 455489 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1016 18:23:20.127751 455489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1016 18:23:20.189375 455489 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:23:20.177794219 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1016 18:23:20.189699 455489 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1016 18:23:20.189729 455489 cni.go:84] Creating CNI manager for ""
I1016 18:23:20.189778 455489 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1016 18:23:20.189823 455489 start.go:349] cluster config:
{Name:no-preload-810937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-810937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1016 18:23:20.191484 455489 out.go:179] * Starting "no-preload-810937" primary control-plane node in "no-preload-810937" cluster
I1016 18:23:20.192498 455489 cache.go:123] Beginning downloading kic base image for docker with containerd
I1016 18:23:20.193597 455489 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
I1016 18:23:20.194499 455489 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1016 18:23:20.194599 455489 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
I1016 18:23:20.194639 455489 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/config.json ...
I1016 18:23:20.194933 455489 cache.go:107] acquiring lock: {Name:mked92c8f8b317e17c170ee3a027b2d9132a32db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.194936 455489 cache.go:107] acquiring lock: {Name:mk5419291eb561b086b1de9aba5e6720aa185e5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.194937 455489 cache.go:107] acquiring lock: {Name:mk3048bba5ea1bd44acae4988ad901b1a7776ef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.194936 455489 cache.go:107] acquiring lock: {Name:mk29f490c5d38ab2bfd70bd4fb946469606e5127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.195018 455489 cache.go:107] acquiring lock: {Name:mkd38e7a38639ba1c589a72c4e114d4477612982 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.195026 455489 cache.go:107] acquiring lock: {Name:mkaf2d7b31bf9030b4caa366350cbc6f0e422423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.195093 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
I1016 18:23:20.195095 455489 cache.go:107] acquiring lock: {Name:mk0c9e1ae08b3d696454ac9e1812d2c668ad2991 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.195066 455489 cache.go:107] acquiring lock: {Name:mk68c4caec532dcdb5624743d5add752a9ce40c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.195120 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
I1016 18:23:20.195141 455489 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 117.824µs
I1016 18:23:20.195153 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
I1016 18:23:20.195165 455489 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
I1016 18:23:20.195148 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
I1016 18:23:20.195165 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
I1016 18:23:20.195174 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
I1016 18:23:20.195174 455489 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 247.916µs
I1016 18:23:20.195110 455489 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 198.281µs
I1016 18:23:20.195188 455489 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
I1016 18:23:20.195186 455489 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 171.541µs
I1016 18:23:20.195117 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1016 18:23:20.195198 455489 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
I1016 18:23:20.195208 455489 cache.go:115] /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
I1016 18:23:20.195216 455489 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 124.314µs
I1016 18:23:20.195224 455489 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
I1016 18:23:20.195212 455489 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 287.721µs
I1016 18:23:20.195238 455489 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1016 18:23:20.195192 455489 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
I1016 18:23:20.195178 455489 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 262.069µs
I1016 18:23:20.195299 455489 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
I1016 18:23:20.195183 455489 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 130.171µs
I1016 18:23:20.195343 455489 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21738-136141/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
I1016 18:23:20.195356 455489 cache.go:87] Successfully saved all images to host disk.
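The interleaved cache.go burst above follows a per-image pattern: acquire a named lock, stat the cached tar, and skip the export when the file already exists (hence the microsecond timings). A hedged Go sketch of that pattern; the lock bookkeeping and paths are illustrative, not minikube's internals.
-- sketch (Go) --
package main

import (
	"fmt"
	"os"
	"sync"
)

var cacheLocks sync.Map // image name -> *sync.Mutex

// saveToCache skips work when the cached tar already exists, mirroring
// the "exists ... took NNNµs ... succeeded" lines above. Illustrative only.
func saveToCache(image, tarPath string) error {
	mu, _ := cacheLocks.LoadOrStore(image, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	if _, err := os.Stat(tarPath); err == nil {
		return nil // already cached; nothing to do
	}
	// ... a real implementation would pull the image and write the tar here ...
	return fmt.Errorf("export of %s not implemented in this sketch", image)
}

func main() {
	_ = saveToCache("registry.k8s.io/pause:3.10.1",
		"/tmp/cache/images/amd64/registry.k8s.io/pause_3.10.1")
}
-- /sketch --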
I1016 18:23:20.215840 455489 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
I1016 18:23:20.215858 455489 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
I1016 18:23:20.215873 455489 cache.go:232] Successfully downloaded all kic artifacts
I1016 18:23:20.215897 455489 start.go:360] acquireMachinesLock for no-preload-810937: {Name:mk7e1695d08d9283fff287421d2a632e5e9aa934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:23:20.215946 455489 start.go:364] duration metric: took 35.933µs to acquireMachinesLock for "no-preload-810937"
I1016 18:23:20.215963 455489 start.go:96] Skipping create...Using existing machine configuration
I1016 18:23:20.215984 455489 fix.go:54] fixHost starting:
I1016 18:23:20.216204 455489 cli_runner.go:164] Run: docker container inspect no-preload-810937 --format={{.State.Status}}
I1016 18:23:20.233828 455489 fix.go:112] recreateIfNeeded on no-preload-810937: state=Stopped err=<nil>
W1016 18:23:20.233862 455489 fix.go:138] unexpected machine state, will restart: <nil>
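fixHost decides between reusing and recreating the machine from the container state returned by the `docker container inspect --format={{.State.Status}}` call above. The same query from Go, as a sketch with error handling trimmed:
-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out exactly like the cli_runner line above.
// Note docker itself reports e.g. "exited"; the log's state=Stopped is
// minikube's own mapping of that status.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("no-preload-810937")
	fmt.Println(state, err)
}
-- /sketch --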
I1016 18:23:17.837155 449077 addons.go:514] duration metric: took 511.695971ms for enable addons: enabled=[storage-provisioner default-storageclass]
I1016 18:23:18.116619 449077 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-666387" context rescaled to 1 replicas
W1016 18:23:19.616662 449077 node_ready.go:57] node "embed-certs-666387" has "Ready":"False" status (will retry)
I1016 18:23:20.527276 356761 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1016 18:23:20.529593 356761 out.go:203]
W1016 18:23:20.530695 356761 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
W1016 18:23:20.530717 356761 out.go:285] *
W1016 18:23:20.533267 356761 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1016 18:23:20.534187 356761 out.go:203]
W1016 18:23:15.988360 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
W1016 18:23:18.486539 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
W1016 18:23:20.486590 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
I1016 18:23:20.236347 455489 out.go:252] * Restarting existing docker container for "no-preload-810937" ...
I1016 18:23:20.236432 455489 cli_runner.go:164] Run: docker start no-preload-810937
I1016 18:23:20.487816 455489 cli_runner.go:164] Run: docker container inspect no-preload-810937 --format={{.State.Status}}
I1016 18:23:20.507977 455489 kic.go:430] container "no-preload-810937" state is running.
I1016 18:23:20.508541 455489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-810937
I1016 18:23:20.527567 455489 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/config.json ...
I1016 18:23:20.527841 455489 machine.go:93] provisionDockerMachine start ...
I1016 18:23:20.527912 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:20.552213 455489 main.go:141] libmachine: Using SSH client type: native
I1016 18:23:20.552555 455489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33113 <nil> <nil>}
I1016 18:23:20.552578 455489 main.go:141] libmachine: About to run SSH command:
hostname
I1016 18:23:20.553345 455489 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56376->127.0.0.1:33113: read: connection reset by peer
I1016 18:23:23.700725 455489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-810937
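The dial error at 18:23:20.553 followed by a successful command at 18:23:23.700 reflects retrying while sshd inside the restarted container comes up. A sketch of such a retry loop using golang.org/x/crypto/ssh; the auth setup is omitted and the parameters are illustrative.
-- sketch (Go) --
package main

import (
	"log"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps retrying while sshd is still starting, the situation
// behind the "connection reset by peer" line above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Second) // sshd not ready yet; back off and retry
	}
	return nil, lastErr
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		Timeout:         5 * time.Second,
	}
	if _, err := dialWithRetry("127.0.0.1:33113", cfg, 10); err != nil {
		log.Fatal(err)
	}
}
-- /sketch --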
I1016 18:23:23.700762 455489 ubuntu.go:182] provisioning hostname "no-preload-810937"
I1016 18:23:23.700832 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:23.720298 455489 main.go:141] libmachine: Using SSH client type: native
I1016 18:23:23.720569 455489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33113 <nil> <nil>}
I1016 18:23:23.720585 455489 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-810937 && echo "no-preload-810937" | sudo tee /etc/hostname
I1016 18:23:23.878236 455489 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-810937
I1016 18:23:23.878340 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:23.897634 455489 main.go:141] libmachine: Using SSH client type: native
I1016 18:23:23.897908 455489 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33113 <nil> <nil>}
I1016 18:23:23.897927 455489 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-810937' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-810937/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-810937' | sudo tee -a /etc/hosts;
fi
fi
I1016 18:23:24.039494 455489 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1016 18:23:24.039538 455489 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-136141/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-136141/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-136141/.minikube}
I1016 18:23:24.039567 455489 ubuntu.go:190] setting up certificates
I1016 18:23:24.039580 455489 provision.go:84] configureAuth start
I1016 18:23:24.039643 455489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-810937
I1016 18:23:24.059971 455489 provision.go:143] copyHostCerts
I1016 18:23:24.060075 455489 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-136141/.minikube/key.pem, removing ...
I1016 18:23:24.060098 455489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-136141/.minikube/key.pem
I1016 18:23:24.060181 455489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-136141/.minikube/key.pem (1675 bytes)
I1016 18:23:24.060317 455489 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-136141/.minikube/ca.pem, removing ...
I1016 18:23:24.060330 455489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-136141/.minikube/ca.pem
I1016 18:23:24.060374 455489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-136141/.minikube/ca.pem (1082 bytes)
I1016 18:23:24.060460 455489 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-136141/.minikube/cert.pem, removing ...
I1016 18:23:24.060471 455489 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-136141/.minikube/cert.pem
I1016 18:23:24.060506 455489 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-136141/.minikube/cert.pem (1123 bytes)
I1016 18:23:24.060658 455489 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-136141/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca-key.pem org=jenkins.no-preload-810937 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-810937]
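The san=[...] list in the provision.go line above becomes the Subject Alternative Names of the generated server certificate. A compact self-signed sketch with crypto/x509 (minikube signs with its CA instead; this only illustrates where the SAN fields land):
-- sketch (Go) --
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-810937"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		// The SAN list from the provision.go line above:
		DNSNames:    []string{"localhost", "minikube", "no-preload-810937"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
-- /sketch --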
I1016 18:23:24.275259 455489 provision.go:177] copyRemoteCerts
I1016 18:23:24.275329 455489 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1016 18:23:24.275371 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:24.292902 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:24.393634 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1016 18:23:24.412161 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1016 18:23:24.431084 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1016 18:23:24.451495 455489 provision.go:87] duration metric: took 411.892171ms to configureAuth
I1016 18:23:24.451527 455489 ubuntu.go:206] setting minikube options for container-runtime
I1016 18:23:24.451726 455489 config.go:182] Loaded profile config "no-preload-810937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1016 18:23:24.451744 455489 machine.go:96] duration metric: took 3.923888413s to provisionDockerMachine
I1016 18:23:24.451755 455489 start.go:293] postStartSetup for "no-preload-810937" (driver="docker")
I1016 18:23:24.451766 455489 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1016 18:23:24.451812 455489 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1016 18:23:24.451858 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:24.469941 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:24.570282 455489 ssh_runner.go:195] Run: cat /etc/os-release
I1016 18:23:24.574098 455489 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1016 18:23:24.574130 455489 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1016 18:23:24.574142 455489 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-136141/.minikube/addons for local assets ...
I1016 18:23:24.574203 455489 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-136141/.minikube/files for local assets ...
I1016 18:23:24.574311 455489 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem -> 1398262.pem in /etc/ssl/certs
I1016 18:23:24.574447 455489 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1016 18:23:24.582313 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem --> /etc/ssl/certs/1398262.pem (1708 bytes)
I1016 18:23:24.600688 455489 start.go:296] duration metric: took 148.912087ms for postStartSetup
I1016 18:23:24.600772 455489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1016 18:23:24.600816 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:24.618815 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:24.714960 455489 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1016 18:23:24.719845 455489 fix.go:56] duration metric: took 4.503850373s for fixHost
I1016 18:23:24.719871 455489 start.go:83] releasing machines lock for "no-preload-810937", held for 4.50391446s
I1016 18:23:24.719945 455489 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-810937
I1016 18:23:24.737284 455489 ssh_runner.go:195] Run: cat /version.json
I1016 18:23:24.737343 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:24.737376 455489 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1016 18:23:24.737469 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:24.756403 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:24.757067 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:24.906123 455489 ssh_runner.go:195] Run: systemctl --version
I1016 18:23:24.913073 455489 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1016 18:23:24.917716 455489 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1016 18:23:24.917762 455489 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1016 18:23:24.925894 455489 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1016 18:23:24.925913 455489 start.go:495] detecting cgroup driver to use...
I1016 18:23:24.925945 455489 detect.go:190] detected "systemd" cgroup driver on host os
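detect.go reports "systemd" for the host. One common heuristic for that detection, not necessarily minikube's exact one, is to test whether systemd is the running init system:
-- sketch (Go) --
package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver is a hedged approximation: /run/systemd/system
// exists exactly when systemd is the init system, which usually implies
// the systemd cgroup driver should be used.
func detectCgroupDriver() string {
	if fi, err := os.Stat("/run/systemd/system"); err == nil && fi.IsDir() {
		return "systemd"
	}
	return "cgroupfs"
}

func main() { fmt.Println(detectCgroupDriver()) }
-- /sketch --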
I1016 18:23:24.925986 455489 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1016 18:23:24.942111 455489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1016 18:23:24.954521 455489 docker.go:218] disabling cri-docker service (if available) ...
I1016 18:23:24.954574 455489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1016 18:23:24.968265 455489 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1016 18:23:24.980859 455489 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
W1016 18:23:22.116608 449077 node_ready.go:57] node "embed-certs-666387" has "Ready":"False" status (will retry)
W1016 18:23:24.116825 449077 node_ready.go:57] node "embed-certs-666387" has "Ready":"False" status (will retry)
W1016 18:23:22.486712 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
W1016 18:23:24.987518 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
I1016 18:23:25.060901 455489 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1016 18:23:25.148432 455489 docker.go:234] disabling docker service ...
I1016 18:23:25.148526 455489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1016 18:23:25.166977 455489 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1016 18:23:25.180962 455489 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1016 18:23:25.274662 455489 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1016 18:23:25.354994 455489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1016 18:23:25.367862 455489 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1016 18:23:25.382714 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1016 18:23:25.391841 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1016 18:23:25.400536 455489 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1016 18:23:25.400595 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1016 18:23:25.408952 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1016 18:23:25.417585 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1016 18:23:25.425612 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1016 18:23:25.433922 455489 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1016 18:23:25.441926 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1016 18:23:25.450285 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1016 18:23:25.459037 455489 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1016 18:23:25.467920 455489 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1016 18:23:25.475163 455489 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1016 18:23:25.482554 455489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1016 18:23:25.558459 455489 ssh_runner.go:195] Run: sudo systemctl restart containerd
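The run of sed commands above patches /etc/containerd/config.toml in place (sandbox image, SystemdCgroup, runc v2 runtime, CNI conf_dir) before the daemon-reload and restart. The SystemdCgroup edit, redone in Go purely for illustration:
-- sketch (Go) --
package main

import (
	"fmt"
	"regexp"
)

// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

// enableSystemdCgroup rewrites every SystemdCgroup assignment, keeping
// the original indentation captured by group 1.
func enableSystemdCgroup(configTOML string) string {
	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = true")
}

func main() {
	in := "    SystemdCgroup = false\n"
	fmt.Print(enableSystemdCgroup(in)) // "    SystemdCgroup = true"
}
-- /sketch --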
I1016 18:23:25.658482 455489 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1016 18:23:25.658558 455489 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1016 18:23:25.663243 455489 start.go:563] Will wait 60s for crictl version
I1016 18:23:25.663316 455489 ssh_runner.go:195] Run: which crictl
I1016 18:23:25.666942 455489 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1016 18:23:25.691458 455489 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.28
RuntimeApiVersion: v1
I1016 18:23:25.691522 455489 ssh_runner.go:195] Run: containerd --version
I1016 18:23:25.716249 455489 ssh_runner.go:195] Run: containerd --version
I1016 18:23:25.743913 455489 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
I1016 18:23:25.745048 455489 cli_runner.go:164] Run: docker network inspect no-preload-810937 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1016 18:23:25.761973 455489 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1016 18:23:25.766585 455489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
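The bash one-liner above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal line, append the fresh mapping, then copy the temp file back into place. The same transformation as a Go sketch:
-- sketch (Go) --
package main

import (
	"fmt"
	"strings"
)

// updateHosts mirrors the shell pipeline above: drop lines ending in the
// managed hostname, then append the new ip->name mapping.
func updateHosts(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
			continue // stale entry, equivalent of the grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(updateHosts("127.0.0.1\tlocalhost\n", "192.168.76.1", "host.minikube.internal"))
}
-- /sketch --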
I1016 18:23:25.777279 455489 kubeadm.go:883] updating cluster {Name:no-preload-810937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-810937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1016 18:23:25.777420 455489 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1016 18:23:25.777473 455489 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 18:23:25.804413 455489 containerd.go:627] all images are preloaded for containerd runtime.
I1016 18:23:25.804434 455489 cache_images.go:85] Images are preloaded, skipping loading
I1016 18:23:25.804457 455489 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
I1016 18:23:25.804556 455489 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-810937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:no-preload-810937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1016 18:23:25.804605 455489 ssh_runner.go:195] Run: sudo crictl info
I1016 18:23:25.832525 455489 cni.go:84] Creating CNI manager for ""
I1016 18:23:25.832546 455489 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1016 18:23:25.832565 455489 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1016 18:23:25.832650 455489 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-810937 NodeName:no-preload-810937 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1016 18:23:25.832780 455489 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "no-preload-810937"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1016 18:23:25.832848 455489 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1016 18:23:25.841177 455489 binaries.go:44] Found k8s binaries, skipping transfer
I1016 18:23:25.841248 455489 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1016 18:23:25.848889 455489 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I1016 18:23:25.861380 455489 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1016 18:23:25.874703 455489 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
I1016 18:23:25.887794 455489 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1016 18:23:25.891421 455489 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1016 18:23:25.901087 455489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1016 18:23:25.981684 455489 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1016 18:23:26.007328 455489 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937 for IP: 192.168.76.2
I1016 18:23:26.007347 455489 certs.go:195] generating shared ca certs ...
I1016 18:23:26.007362 455489 certs.go:227] acquiring lock for ca certs: {Name:mk7cc3421b912e6a4589d13a0cd6d944b4879005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1016 18:23:26.007500 455489 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-136141/.minikube/ca.key
I1016 18:23:26.007540 455489 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-136141/.minikube/proxy-client-ca.key
I1016 18:23:26.007550 455489 certs.go:257] generating profile certs ...
I1016 18:23:26.007624 455489 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/client.key
I1016 18:23:26.007675 455489 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/apiserver.key.d6ae10ef
I1016 18:23:26.007710 455489 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/proxy-client.key
I1016 18:23:26.007805 455489 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/139826.pem (1338 bytes)
W1016 18:23:26.007834 455489 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-136141/.minikube/certs/139826_empty.pem, impossibly tiny 0 bytes
I1016 18:23:26.007843 455489 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca-key.pem (1675 bytes)
I1016 18:23:26.007863 455489 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/ca.pem (1082 bytes)
I1016 18:23:26.007884 455489 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/cert.pem (1123 bytes)
I1016 18:23:26.007906 455489 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/certs/key.pem (1675 bytes)
I1016 18:23:26.007941 455489 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem (1708 bytes)
I1016 18:23:26.008540 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1016 18:23:26.027541 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1016 18:23:26.046953 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1016 18:23:26.066472 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1016 18:23:26.089208 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1016 18:23:26.111981 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1016 18:23:26.133485 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1016 18:23:26.153003 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/profiles/no-preload-810937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1016 18:23:26.172326 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/files/etc/ssl/certs/1398262.pem --> /usr/share/ca-certificates/1398262.pem (1708 bytes)
I1016 18:23:26.190680 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1016 18:23:26.210427 455489 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-136141/.minikube/certs/139826.pem --> /usr/share/ca-certificates/139826.pem (1338 bytes)
I1016 18:23:26.229996 455489 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1016 18:23:26.242617 455489 ssh_runner.go:195] Run: openssl version
I1016 18:23:26.249070 455489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1016 18:23:26.257789 455489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1016 18:23:26.261534 455489 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:45 /usr/share/ca-certificates/minikubeCA.pem
I1016 18:23:26.261574 455489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1016 18:23:26.296212 455489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1016 18:23:26.304218 455489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139826.pem && ln -fs /usr/share/ca-certificates/139826.pem /etc/ssl/certs/139826.pem"
I1016 18:23:26.312731 455489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139826.pem
I1016 18:23:26.316469 455489 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:51 /usr/share/ca-certificates/139826.pem
I1016 18:23:26.316512 455489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139826.pem
I1016 18:23:26.351846 455489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/139826.pem /etc/ssl/certs/51391683.0"
I1016 18:23:26.359872 455489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1398262.pem && ln -fs /usr/share/ca-certificates/1398262.pem /etc/ssl/certs/1398262.pem"
I1016 18:23:26.368202 455489 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1398262.pem
I1016 18:23:26.371982 455489 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:51 /usr/share/ca-certificates/1398262.pem
I1016 18:23:26.372020 455489 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1398262.pem
I1016 18:23:26.408269 455489 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1398262.pem /etc/ssl/certs/3ec20f2e.0"
I1016 18:23:26.418492 455489 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1016 18:23:26.422684 455489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1016 18:23:26.457331 455489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1016 18:23:26.494338 455489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1016 18:23:26.541707 455489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1016 18:23:26.597395 455489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1016 18:23:26.652673 455489 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
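Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours (the exit status carries the answer). The equivalent check with crypto/x509:
-- sketch (Go) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether a PEM certificate's NotAfter falls inside
// the given window -- the crypto/x509 analogue of `-checkend 86400`.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(expiresWithin(data, 86400*time.Second))
}
-- /sketch --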
I1016 18:23:26.710141 455489 kubeadm.go:400] StartCluster: {Name:no-preload-810937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-810937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1016 18:23:26.710263 455489 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1016 18:23:26.710344 455489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1016 18:23:26.748002 455489 cri.go:89] found id: "c07fe15d1dec373a3edbda312574ad1b8bec5773fd3ca6f9a36345c8cc97e042"
I1016 18:23:26.748028 455489 cri.go:89] found id: "7953eff6b0048a5cd0192fd115441e3903364d8161b00e5bcbc41340855bdd80"
I1016 18:23:26.748034 455489 cri.go:89] found id: "819bf1c7a1a88024a93d23312873885cefce4a04f90cb4ecf609ea31eb25599d"
I1016 18:23:26.748039 455489 cri.go:89] found id: "68db773f707edd432041814d556bb5dd627f7bdb9225856cb3bcf7e5eff5d4e3"
I1016 18:23:26.748043 455489 cri.go:89] found id: "fdd889315f0079ecc54ab7454e94ecf94a818b55ed5398232b369c55394cb98a"
I1016 18:23:26.748048 455489 cri.go:89] found id: "7604f6aaa2a5e93d570aa81d26b527882a368f32b1842f6ab9cf3213214311a4"
I1016 18:23:26.748150 455489 cri.go:89] found id: "70335f6b976b74ba7bba6caaec1f0adc2f19e15e22e0f0fe8feca9203e6d3015"
I1016 18:23:26.748170 455489 cri.go:89] found id: "214ad55724b045eb95f3faccd506793ab275176260cfa7e62ea0233bae3a010b"
I1016 18:23:26.748175 455489 cri.go:89] found id: "f3a02a97d71bb07cb46ab4aa521f195e268891e3d69e41f78c36dbde75073c25"
I1016 18:23:26.748183 455489 cri.go:89] found id: "ae2a032cc48f447266e7c816ad378fbd45c3a49bc4c6b68a9670c87bbbb51413"
I1016 18:23:26.748187 455489 cri.go:89] found id: "d62ec3032d7bf5ed1c4a5fb4e7eed61532a9177e9d6f2a601b57a46a9ef60a3d"
I1016 18:23:26.748191 455489 cri.go:89] found id: "99d8cbc7fb70fcbeab01c419b2fdd54f6db61694e646c00ef329611e52188f55"
I1016 18:23:26.748195 455489 cri.go:89] found id: ""
I1016 18:23:26.748245 455489 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1016 18:23:26.780552 455489 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"68db773f707edd432041814d556bb5dd627f7bdb9225856cb3bcf7e5eff5d4e3","pid":943,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/68db773f707edd432041814d556bb5dd627f7bdb9225856cb3bcf7e5eff5d4e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/68db773f707edd432041814d556bb5dd627f7bdb9225856cb3bcf7e5eff5d4e3/rootfs","created":"2025-10-16T18:23:26.711995561Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"c2095aed7e3c4bab6fc73fcc6b6b85b1d9486e8d714b60ac81c1cf2a9084b06a","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"03a04c85d3e01ef61f567e31ab048a7b"},"owner":"root"},{"ociVersion":"1.2.1","id":"757d802ee144d0d950eeb6661dde0342b70fad06bf56101f69daa29e783b385f","pid":848,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/757d802ee144d0d950eeb6661dde0342b70fad06bf56101f69daa29e783b385f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/757d802ee144d0d950eeb6661dde0342b70fad06bf56101f69daa29e783b385f/rootfs","created":"2025-10-16T18:23:26.59899882Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"757d802ee144d0d950eeb6661dde0342b70fad06bf56101f69daa29e783b385f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-810937_bcecd8e8f67f54b5f7a70a6563c939ba","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bcecd8e8f67f54b5f7a70a6563c939ba"},"owner":"root"},{"ociVersion":"1.2.1","id":"7953eff6b0048a5cd0192fd115441e3903364d8161b00e5bcbc41340855bdd80","pid":980,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7953eff6b0048a5cd0192fd115441e3903364d8161b00e5bcbc41340855bdd80","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7953eff6b0048a5cd0192fd115441e3903364d8161b00e5bcbc41340855bdd80/rootfs","created":"2025-10-16T18:23:26.724263915Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"757d802ee144d0d950eeb6661dde0342b70fad06bf56101f69daa29e783b385f","io.kubernetes.cri.sandbox-name":"etcd-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"bcecd8e8f67f54b5f7a70a6563c939ba"},"owner":"root"},{"ociVersion":"1.2.1","id":"819bf1c7a1a88024a93d23312873885cefce4a04f90cb4ecf609ea31eb25599d","pid":956,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/819bf1c7a1a88024a93d23312873885cefce4a04f90cb4ecf609ea31eb25599d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/819bf1c7a1a88024a93d23312873885cefce4a04f90cb4ecf609ea31eb25599d/rootfs","created":"2025-10-16T18:23:26.722533714Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"9cf9e3ec8330e0e479fa7e1ccd4a6f91b10d0ef1f4f4b3f0bf8be14db1b655c3","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3db8b64df52ab7212f4a554714b8d201"},"owner":"root"},{"ociVersion":"1.2.1","id":"986df954547d1a7553139fa2853ef993f649e8bd9c4db6a61f03e2f689d0f426","pid":850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/986df954547d1a7553139fa2853ef993f649e8bd9c4db6a61f03e2f689d0f426","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/986df954547d1a7553139fa2853ef993f649e8bd9c4db6a61f03e2f689d0f426/rootfs","created":"2025-10-16T18:23:26.601137576Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"986df954547d1a7553139fa2853ef993f649e8bd9c4db6a61f03e2f689d0f426","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-810937_89f32cec407c3d69ff68a50e576ee3a9","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"89f32cec407c3d69ff68a50e576ee3a9"},"owner":"root"},{"ociVersion":"1.2.1","id":"9cf9e3ec8330e0e479fa7e1ccd4a6f91b10d0ef1f4f4b3f0bf8be14db1b655c3","pid":816,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cf9e3ec8330e0e479fa7e1ccd4a6f91b10d0ef1f4f4b3f0bf8be14db1b655c3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cf9e3ec8330e0e479fa7e1ccd4a6f91b10d0ef1f4f4b3f0bf8be14db1b655c3/rootfs","created":"2025-10-16T18:23:26.576538132Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"9cf9e3ec8330e0e479fa7e1ccd4a6f91b10d0ef1f4f4b3f0bf8be14db1b655c3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-810937_3db8b64df52ab7212f4a554714b8d201","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3db8b64df52ab7212f4a554714b8d201"},"owner":"root"},{"ociVersion":"1.2.1","id":"c07fe15d1dec373a3edbda312574ad1b8bec5773fd3ca6f9a36345c8cc97e042","pid":987,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c07fe15d1dec373a3edbda312574ad1b8bec5773fd3ca6f9a36345c8cc97e042","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c07fe15d1dec373a3edbda312574ad1b8bec5773fd3ca6f9a36345c8cc97e042/rootfs","created":"2025-10-16T18:23:26.727037784Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"986df954547d1a7553139fa2853ef993f649e8bd9c4db6a61f03e2f689d0f426","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"89f32cec407c3d69ff68a50e576ee3a9"},"owner":"root"},{"ociVersion":"1.2.1","id":"c2095aed7e3c4bab6fc73fcc6b6b85b1d9486e8d714b60ac81c1cf2a9084b06a","pid":806,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2095aed7e3c4bab6fc73fcc6b6b85b1d9486e8d714b60ac81c1cf2a9084b06a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2095aed7e3c4bab6fc73fcc6b6b85b1d9486e8d714b60ac81c1cf2a9084b06a/rootfs","created":"2025-10-16T18:23:26.575010392Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"c2095aed7e3c4bab6fc73fcc6b6b85b1d9486e8d714b60ac81c1cf2a9084b06a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-810937_03a04c85d3e01ef61f567e31ab048a7b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-810937","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"03a04c85d3e01ef61f567e31ab048a7b"},"owner":"root"}]
I1016 18:23:26.780741 455489 cri.go:126] list returned 8 containers
I1016 18:23:26.780758 455489 cri.go:129] container: {ID:68db773f707edd432041814d556bb5dd627f7bdb9225856cb3bcf7e5eff5d4e3 Status:running}
I1016 18:23:26.780777 455489 cri.go:135] skipping {68db773f707edd432041814d556bb5dd627f7bdb9225856cb3bcf7e5eff5d4e3 running}: state = "running", want "paused"
I1016 18:23:26.780789 455489 cri.go:129] container: {ID:757d802ee144d0d950eeb6661dde0342b70fad06bf56101f69daa29e783b385f Status:running}
I1016 18:23:26.780798 455489 cri.go:131] skipping 757d802ee144d0d950eeb6661dde0342b70fad06bf56101f69daa29e783b385f - not in ps
I1016 18:23:26.780803 455489 cri.go:129] container: {ID:7953eff6b0048a5cd0192fd115441e3903364d8161b00e5bcbc41340855bdd80 Status:running}
I1016 18:23:26.780812 455489 cri.go:135] skipping {7953eff6b0048a5cd0192fd115441e3903364d8161b00e5bcbc41340855bdd80 running}: state = "running", want "paused"
I1016 18:23:26.780818 455489 cri.go:129] container: {ID:819bf1c7a1a88024a93d23312873885cefce4a04f90cb4ecf609ea31eb25599d Status:running}
I1016 18:23:26.780823 455489 cri.go:135] skipping {819bf1c7a1a88024a93d23312873885cefce4a04f90cb4ecf609ea31eb25599d running}: state = "running", want "paused"
I1016 18:23:26.780828 455489 cri.go:129] container: {ID:986df954547d1a7553139fa2853ef993f649e8bd9c4db6a61f03e2f689d0f426 Status:running}
I1016 18:23:26.780835 455489 cri.go:131] skipping 986df954547d1a7553139fa2853ef993f649e8bd9c4db6a61f03e2f689d0f426 - not in ps
I1016 18:23:26.780842 455489 cri.go:129] container: {ID:9cf9e3ec8330e0e479fa7e1ccd4a6f91b10d0ef1f4f4b3f0bf8be14db1b655c3 Status:running}
I1016 18:23:26.780847 455489 cri.go:131] skipping 9cf9e3ec8330e0e479fa7e1ccd4a6f91b10d0ef1f4f4b3f0bf8be14db1b655c3 - not in ps
I1016 18:23:26.780852 455489 cri.go:129] container: {ID:c07fe15d1dec373a3edbda312574ad1b8bec5773fd3ca6f9a36345c8cc97e042 Status:running}
I1016 18:23:26.780860 455489 cri.go:135] skipping {c07fe15d1dec373a3edbda312574ad1b8bec5773fd3ca6f9a36345c8cc97e042 running}: state = "running", want "paused"
I1016 18:23:26.780866 455489 cri.go:129] container: {ID:c2095aed7e3c4bab6fc73fcc6b6b85b1d9486e8d714b60ac81c1cf2a9084b06a Status:running}
I1016 18:23:26.780872 455489 cri.go:131] skipping c2095aed7e3c4bab6fc73fcc6b6b85b1d9486e8d714b60ac81c1cf2a9084b06a - not in ps
I1016 18:23:26.780921 455489 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1016 18:23:26.793764 455489 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I1016 18:23:26.793867 455489 kubeadm.go:597] restartPrimaryControlPlane start ...
I1016 18:23:26.793959 455489 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1016 18:23:26.805726 455489 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1016 18:23:26.807275 455489 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-810937" does not appear in /home/jenkins/minikube-integration/21738-136141/kubeconfig
I1016 18:23:26.808296 455489 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-136141/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-810937" cluster setting kubeconfig missing "no-preload-810937" context setting]
I1016 18:23:26.809697 455489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-136141/kubeconfig: {Name:mkcd7be31e9131e009fd8c01dbeba0d9b0a559bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1016 18:23:26.812105 455489 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1016 18:23:26.823466 455489 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
I1016 18:23:26.823500 455489 kubeadm.go:601] duration metric: took 29.612498ms to restartPrimaryControlPlane
I1016 18:23:26.823511 455489 kubeadm.go:402] duration metric: took 113.383897ms to StartCluster
I1016 18:23:26.823529 455489 settings.go:142] acquiring lock: {Name:mk69e9fda206cb3246d193be5125ea7b81edb7d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1016 18:23:26.823739 455489 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21738-136141/kubeconfig
I1016 18:23:26.826345 455489 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-136141/kubeconfig: {Name:mkcd7be31e9131e009fd8c01dbeba0d9b0a559bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1016 18:23:26.826606 455489 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1016 18:23:26.826677 455489 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1016 18:23:26.826793 455489 addons.go:69] Setting storage-provisioner=true in profile "no-preload-810937"
I1016 18:23:26.826818 455489 addons.go:238] Setting addon storage-provisioner=true in "no-preload-810937"
W1016 18:23:26.826838 455489 addons.go:247] addon storage-provisioner should already be in state true
I1016 18:23:26.826872 455489 host.go:66] Checking if "no-preload-810937" exists ...
I1016 18:23:26.826885 455489 config.go:182] Loaded profile config "no-preload-810937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1016 18:23:26.826965 455489 addons.go:69] Setting default-storageclass=true in profile "no-preload-810937"
I1016 18:23:26.826982 455489 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-810937"
I1016 18:23:26.827379 455489 addons.go:69] Setting metrics-server=true in profile "no-preload-810937"
I1016 18:23:26.827410 455489 addons.go:238] Setting addon metrics-server=true in "no-preload-810937"
W1016 18:23:26.827419 455489 addons.go:247] addon metrics-server should already be in state true
I1016 18:23:26.827423 455489 addons.go:69] Setting dashboard=true in profile "no-preload-810937"
I1016 18:23:26.827452 455489 host.go:66] Checking if "no-preload-810937" exists ...
I1016 18:23:26.827461 455489 cli_runner.go:164] Run: docker container inspect no-preload-810937 --format={{.State.Status}}
I1016 18:23:26.827462 455489 addons.go:238] Setting addon dashboard=true in "no-preload-810937"
W1016 18:23:26.827590 455489 addons.go:247] addon dashboard should already be in state true
I1016 18:23:26.827647 455489 host.go:66] Checking if "no-preload-810937" exists ...
I1016 18:23:26.827887 455489 cli_runner.go:164] Run: docker container inspect no-preload-810937 --format={{.State.Status}}
I1016 18:23:26.828155 455489 cli_runner.go:164] Run: docker container inspect no-preload-810937 --format={{.State.Status}}
I1016 18:23:26.827434 455489 cli_runner.go:164] Run: docker container inspect no-preload-810937 --format={{.State.Status}}
I1016 18:23:26.832153 455489 out.go:179] * Verifying Kubernetes components...
I1016 18:23:26.835132 455489 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1016 18:23:26.860963 455489 addons.go:238] Setting addon default-storageclass=true in "no-preload-810937"
W1016 18:23:26.861014 455489 addons.go:247] addon default-storageclass should already be in state true
I1016 18:23:26.861047 455489 host.go:66] Checking if "no-preload-810937" exists ...
I1016 18:23:26.861580 455489 cli_runner.go:164] Run: docker container inspect no-preload-810937 --format={{.State.Status}}
I1016 18:23:26.861970 455489 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1016 18:23:26.861984 455489 out.go:179] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1016 18:23:26.862050 455489 out.go:179] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1016 18:23:26.863224 455489 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1016 18:23:26.863290 455489 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1016 18:23:26.863267 455489 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1016 18:23:26.863409 455489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1016 18:23:26.863455 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:26.863371 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:26.868446 455489 out.go:179] - Using image registry.k8s.io/echoserver:1.4
I1016 18:23:26.869721 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1016 18:23:26.869775 455489 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1016 18:23:26.869864 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:26.908654 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:26.909334 455489 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I1016 18:23:26.909349 455489 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1016 18:23:26.909428 455489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-810937
I1016 18:23:26.908647 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:26.910381 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:26.937731 455489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-136141/.minikube/machines/no-preload-810937/id_rsa Username:docker}
I1016 18:23:26.995243 455489 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1016 18:23:27.009889 455489 node_ready.go:35] waiting up to 6m0s for node "no-preload-810937" to be "Ready" ...
I1016 18:23:27.031096 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1016 18:23:27.031120 455489 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1016 18:23:27.031140 455489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1016 18:23:27.031599 455489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1016 18:23:27.031614 455489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1016 18:23:27.051790 455489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1016 18:23:27.051812 455489 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1016 18:23:27.052013 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1016 18:23:27.052032 455489 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1016 18:23:27.055476 455489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1016 18:23:27.070540 455489 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1016 18:23:27.070562 455489 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1016 18:23:27.071785 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1016 18:23:27.071803 455489 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1016 18:23:27.088455 455489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1016 18:23:27.090637 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1016 18:23:27.090658 455489 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1016 18:23:27.111879 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1016 18:23:27.111909 455489 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1016 18:23:27.134359 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1016 18:23:27.134386 455489 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1016 18:23:27.157444 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1016 18:23:27.157470 455489 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1016 18:23:27.173897 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1016 18:23:27.173921 455489 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1016 18:23:27.186979 455489 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1016 18:23:27.187003 455489 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1016 18:23:27.199372 455489 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1016 18:23:28.508681 455489 node_ready.go:49] node "no-preload-810937" is "Ready"
I1016 18:23:28.508715 455489 node_ready.go:38] duration metric: took 1.498773852s for node "no-preload-810937" to be "Ready" ...
I1016 18:23:28.508733 455489 api_server.go:52] waiting for apiserver process to appear ...
I1016 18:23:28.508788 455489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1016 18:23:29.241941 455489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.210765157s)
I1016 18:23:29.241991 455489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.186483616s)
I1016 18:23:29.242151 455489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.153663943s)
I1016 18:23:29.242185 455489 addons.go:479] Verifying addon metrics-server=true in "no-preload-810937"
I1016 18:23:29.242257 455489 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.042852604s)
I1016 18:23:29.242279 455489 api_server.go:72] duration metric: took 2.415637902s to wait for apiserver process to appear ...
I1016 18:23:29.242295 455489 api_server.go:88] waiting for apiserver healthz status ...
I1016 18:23:29.242317 455489 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1016 18:23:29.243595 455489 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-810937 addons enable metrics-server
I1016 18:23:29.246990 455489 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1016 18:23:29.247033 455489 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1016 18:23:29.250670 455489 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I1016 18:23:29.251738 455489 addons.go:514] duration metric: took 2.425073799s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
I1016 18:23:29.742596 455489 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1016 18:23:29.749971 455489 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1016 18:23:29.750048 455489 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1016 18:23:26.117117 449077 node_ready.go:57] node "embed-certs-666387" has "Ready":"False" status (will retry)
I1016 18:23:28.617098 449077 node_ready.go:49] node "embed-certs-666387" is "Ready"
I1016 18:23:28.617191 449077 node_ready.go:38] duration metric: took 11.003767767s for node "embed-certs-666387" to be "Ready" ...
I1016 18:23:28.617223 449077 api_server.go:52] waiting for apiserver process to appear ...
I1016 18:23:28.617291 449077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1016 18:23:28.633504 449077 api_server.go:72] duration metric: took 11.308105285s to wait for apiserver process to appear ...
I1016 18:23:28.633531 449077 api_server.go:88] waiting for apiserver healthz status ...
I1016 18:23:28.633554 449077 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
I1016 18:23:28.640667 449077 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
ok
I1016 18:23:28.642075 449077 api_server.go:141] control plane version: v1.34.1
I1016 18:23:28.642133 449077 api_server.go:131] duration metric: took 8.591383ms to wait for apiserver health ...
I1016 18:23:28.642150 449077 system_pods.go:43] waiting for kube-system pods to appear ...
I1016 18:23:28.646140 449077 system_pods.go:59] 8 kube-system pods found
I1016 18:23:28.646194 449077 system_pods.go:61] "coredns-66bc5c9577-4rpls" [4b72f4b7-6ce0-4cbb-bbee-a2b347439fd5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1016 18:23:28.646203 449077 system_pods.go:61] "etcd-embed-certs-666387" [a9abc6c3-86fc-44bf-9171-aa63cfffdd6c] Running
I1016 18:23:28.646212 449077 system_pods.go:61] "kindnet-q96vr" [7286a6c4-2c41-4646-b4cc-11beaeeec612] Running
I1016 18:23:28.646219 449077 system_pods.go:61] "kube-apiserver-embed-certs-666387" [11c0ae31-c275-4b41-9df7-e5858a253058] Running
I1016 18:23:28.646225 449077 system_pods.go:61] "kube-controller-manager-embed-certs-666387" [d18a56c2-ac31-4a55-8915-cef451a3ce15] Running
I1016 18:23:28.646231 449077 system_pods.go:61] "kube-proxy-thmpl" [d819d2d9-75a7-4963-ad2b-868ad6701466] Running
I1016 18:23:28.646241 449077 system_pods.go:61] "kube-scheduler-embed-certs-666387" [aac263d8-eede-4b22-b762-e8b9ec24d5ae] Running
I1016 18:23:28.646250 449077 system_pods.go:61] "storage-provisioner" [867baa84-5b54-4886-baa2-7f550d2ce156] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1016 18:23:28.646262 449077 system_pods.go:74] duration metric: took 4.092412ms to wait for pod list to return data ...
I1016 18:23:28.646272 449077 default_sa.go:34] waiting for default service account to be created ...
I1016 18:23:28.649312 449077 default_sa.go:45] found service account: "default"
I1016 18:23:28.649335 449077 default_sa.go:55] duration metric: took 3.055522ms for default service account to be created ...
I1016 18:23:28.649345 449077 system_pods.go:116] waiting for k8s-apps to be running ...
I1016 18:23:28.652599 449077 system_pods.go:86] 8 kube-system pods found
I1016 18:23:28.652634 449077 system_pods.go:89] "coredns-66bc5c9577-4rpls" [4b72f4b7-6ce0-4cbb-bbee-a2b347439fd5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1016 18:23:28.652642 449077 system_pods.go:89] "etcd-embed-certs-666387" [a9abc6c3-86fc-44bf-9171-aa63cfffdd6c] Running
I1016 18:23:28.652650 449077 system_pods.go:89] "kindnet-q96vr" [7286a6c4-2c41-4646-b4cc-11beaeeec612] Running
I1016 18:23:28.652660 449077 system_pods.go:89] "kube-apiserver-embed-certs-666387" [11c0ae31-c275-4b41-9df7-e5858a253058] Running
I1016 18:23:28.652666 449077 system_pods.go:89] "kube-controller-manager-embed-certs-666387" [d18a56c2-ac31-4a55-8915-cef451a3ce15] Running
I1016 18:23:28.652670 449077 system_pods.go:89] "kube-proxy-thmpl" [d819d2d9-75a7-4963-ad2b-868ad6701466] Running
I1016 18:23:28.652676 449077 system_pods.go:89] "kube-scheduler-embed-certs-666387" [aac263d8-eede-4b22-b762-e8b9ec24d5ae] Running
I1016 18:23:28.652683 449077 system_pods.go:89] "storage-provisioner" [867baa84-5b54-4886-baa2-7f550d2ce156] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1016 18:23:28.652711 449077 retry.go:31] will retry after 248.412984ms: missing components: kube-dns
I1016 18:23:28.907200 449077 system_pods.go:86] 8 kube-system pods found
I1016 18:23:28.907247 449077 system_pods.go:89] "coredns-66bc5c9577-4rpls" [4b72f4b7-6ce0-4cbb-bbee-a2b347439fd5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1016 18:23:28.907256 449077 system_pods.go:89] "etcd-embed-certs-666387" [a9abc6c3-86fc-44bf-9171-aa63cfffdd6c] Running
I1016 18:23:28.907264 449077 system_pods.go:89] "kindnet-q96vr" [7286a6c4-2c41-4646-b4cc-11beaeeec612] Running
I1016 18:23:28.907270 449077 system_pods.go:89] "kube-apiserver-embed-certs-666387" [11c0ae31-c275-4b41-9df7-e5858a253058] Running
I1016 18:23:28.907278 449077 system_pods.go:89] "kube-controller-manager-embed-certs-666387" [d18a56c2-ac31-4a55-8915-cef451a3ce15] Running
I1016 18:23:28.907284 449077 system_pods.go:89] "kube-proxy-thmpl" [d819d2d9-75a7-4963-ad2b-868ad6701466] Running
I1016 18:23:28.907290 449077 system_pods.go:89] "kube-scheduler-embed-certs-666387" [aac263d8-eede-4b22-b762-e8b9ec24d5ae] Running
I1016 18:23:28.907302 449077 system_pods.go:89] "storage-provisioner" [867baa84-5b54-4886-baa2-7f550d2ce156] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1016 18:23:28.907322 449077 retry.go:31] will retry after 390.544065ms: missing components: kube-dns
I1016 18:23:29.303082 449077 system_pods.go:86] 8 kube-system pods found
I1016 18:23:29.303127 449077 system_pods.go:89] "coredns-66bc5c9577-4rpls" [4b72f4b7-6ce0-4cbb-bbee-a2b347439fd5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1016 18:23:29.303134 449077 system_pods.go:89] "etcd-embed-certs-666387" [a9abc6c3-86fc-44bf-9171-aa63cfffdd6c] Running
I1016 18:23:29.303142 449077 system_pods.go:89] "kindnet-q96vr" [7286a6c4-2c41-4646-b4cc-11beaeeec612] Running
I1016 18:23:29.303147 449077 system_pods.go:89] "kube-apiserver-embed-certs-666387" [11c0ae31-c275-4b41-9df7-e5858a253058] Running
I1016 18:23:29.303153 449077 system_pods.go:89] "kube-controller-manager-embed-certs-666387" [d18a56c2-ac31-4a55-8915-cef451a3ce15] Running
I1016 18:23:29.303162 449077 system_pods.go:89] "kube-proxy-thmpl" [d819d2d9-75a7-4963-ad2b-868ad6701466] Running
I1016 18:23:29.303169 449077 system_pods.go:89] "kube-scheduler-embed-certs-666387" [aac263d8-eede-4b22-b762-e8b9ec24d5ae] Running
I1016 18:23:29.303179 449077 system_pods.go:89] "storage-provisioner" [867baa84-5b54-4886-baa2-7f550d2ce156] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1016 18:23:29.303200 449077 retry.go:31] will retry after 411.019664ms: missing components: kube-dns
I1016 18:23:29.723478 449077 system_pods.go:86] 8 kube-system pods found
I1016 18:23:29.723515 449077 system_pods.go:89] "coredns-66bc5c9577-4rpls" [4b72f4b7-6ce0-4cbb-bbee-a2b347439fd5] Running
I1016 18:23:29.723522 449077 system_pods.go:89] "etcd-embed-certs-666387" [a9abc6c3-86fc-44bf-9171-aa63cfffdd6c] Running
I1016 18:23:29.723528 449077 system_pods.go:89] "kindnet-q96vr" [7286a6c4-2c41-4646-b4cc-11beaeeec612] Running
I1016 18:23:29.723533 449077 system_pods.go:89] "kube-apiserver-embed-certs-666387" [11c0ae31-c275-4b41-9df7-e5858a253058] Running
I1016 18:23:29.723547 449077 system_pods.go:89] "kube-controller-manager-embed-certs-666387" [d18a56c2-ac31-4a55-8915-cef451a3ce15] Running
I1016 18:23:29.723552 449077 system_pods.go:89] "kube-proxy-thmpl" [d819d2d9-75a7-4963-ad2b-868ad6701466] Running
I1016 18:23:29.723557 449077 system_pods.go:89] "kube-scheduler-embed-certs-666387" [aac263d8-eede-4b22-b762-e8b9ec24d5ae] Running
I1016 18:23:29.723561 449077 system_pods.go:89] "storage-provisioner" [867baa84-5b54-4886-baa2-7f550d2ce156] Running
I1016 18:23:29.723572 449077 system_pods.go:126] duration metric: took 1.074220079s to wait for k8s-apps to be running ...
I1016 18:23:29.723581 449077 system_svc.go:44] waiting for kubelet service to be running ....
I1016 18:23:29.723637 449077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1016 18:23:29.750186 449077 system_svc.go:56] duration metric: took 26.592335ms WaitForService to wait for kubelet
I1016 18:23:29.750227 449077 kubeadm.go:586] duration metric: took 12.424832909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1016 18:23:29.750263 449077 node_conditions.go:102] verifying NodePressure condition ...
I1016 18:23:29.756929 449077 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1016 18:23:29.756964 449077 node_conditions.go:123] node cpu capacity is 8
I1016 18:23:29.756981 449077 node_conditions.go:105] duration metric: took 6.712923ms to run NodePressure ...
I1016 18:23:29.756997 449077 start.go:241] waiting for startup goroutines ...
I1016 18:23:29.757006 449077 start.go:246] waiting for cluster config update ...
I1016 18:23:29.757019 449077 start.go:255] writing updated cluster config ...
I1016 18:23:29.757359 449077 ssh_runner.go:195] Run: rm -f paused
I1016 18:23:29.763110 449077 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1016 18:23:29.768170 449077 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4rpls" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:29.773345 449077 pod_ready.go:94] pod "coredns-66bc5c9577-4rpls" is "Ready"
I1016 18:23:29.773372 449077 pod_ready.go:86] duration metric: took 5.156201ms for pod "coredns-66bc5c9577-4rpls" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:29.778774 449077 pod_ready.go:83] waiting for pod "etcd-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:29.784099 449077 pod_ready.go:94] pod "etcd-embed-certs-666387" is "Ready"
I1016 18:23:29.784127 449077 pod_ready.go:86] duration metric: took 5.319818ms for pod "etcd-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:29.868520 449077 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:29.873220 449077 pod_ready.go:94] pod "kube-apiserver-embed-certs-666387" is "Ready"
I1016 18:23:29.873246 449077 pod_ready.go:86] duration metric: took 4.685416ms for pod "kube-apiserver-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:29.875230 449077 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:30.168592 449077 pod_ready.go:94] pod "kube-controller-manager-embed-certs-666387" is "Ready"
I1016 18:23:30.168630 449077 pod_ready.go:86] duration metric: took 293.371443ms for pod "kube-controller-manager-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:30.368101 449077 pod_ready.go:83] waiting for pod "kube-proxy-thmpl" in "kube-system" namespace to be "Ready" or be gone ...
W1016 18:23:27.486639 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
W1016 18:23:29.487626 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
I1016 18:23:30.768518 449077 pod_ready.go:94] pod "kube-proxy-thmpl" is "Ready"
I1016 18:23:30.768552 449077 pod_ready.go:86] duration metric: took 400.422228ms for pod "kube-proxy-thmpl" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:30.968262 449077 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:31.368531 449077 pod_ready.go:94] pod "kube-scheduler-embed-certs-666387" is "Ready"
I1016 18:23:31.368564 449077 pod_ready.go:86] duration metric: took 400.266578ms for pod "kube-scheduler-embed-certs-666387" in "kube-system" namespace to be "Ready" or be gone ...
I1016 18:23:31.368580 449077 pod_ready.go:40] duration metric: took 1.60543065s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1016 18:23:31.414849 449077 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
I1016 18:23:31.416741 449077 out.go:179] * Done! kubectl is now configured to use "embed-certs-666387" cluster and "default" namespace by default
I1016 18:23:30.243410 455489 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1016 18:23:30.248129 455489 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1016 18:23:30.249214 455489 api_server.go:141] control plane version: v1.34.1
I1016 18:23:30.249239 455489 api_server.go:131] duration metric: took 1.006935158s to wait for apiserver health ...
I1016 18:23:30.249247 455489 system_pods.go:43] waiting for kube-system pods to appear ...
I1016 18:23:30.253878 455489 system_pods.go:59] 9 kube-system pods found
I1016 18:23:30.253977 455489 system_pods.go:61] "coredns-66bc5c9577-z4hg5" [efda5c82-ee23-48ef-8c2e-28074ed85153] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1016 18:23:30.254009 455489 system_pods.go:61] "etcd-no-preload-810937" [faa1b8f1-5d4e-4500-8e14-a0d0ee2a0e5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1016 18:23:30.254018 455489 system_pods.go:61] "kindnet-5sbf9" [45ed06bc-31e5-4fde-9dbf-a6eaa8b6d938] Running
I1016 18:23:30.254037 455489 system_pods.go:61] "kube-apiserver-no-preload-810937" [6f230cf7-f527-4b0a-a63e-2fe24135c87e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1016 18:23:30.254047 455489 system_pods.go:61] "kube-controller-manager-no-preload-810937" [12dceaec-2dd8-413c-a187-4a70f85ff74b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1016 18:23:30.254232 455489 system_pods.go:61] "kube-proxy-6qs44" [e2e37dee-fd6b-455f-ad48-dbbdee8879da] Running
I1016 18:23:30.254314 455489 system_pods.go:61] "kube-scheduler-no-preload-810937" [85b07845-4cb2-441b-a597-d8b4ff9b8bae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1016 18:23:30.254330 455489 system_pods.go:61] "metrics-server-746fcd58dc-qwt2r" [43f8ae7b-748a-4d49-b73d-be4a0e7ce7bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1016 18:23:30.254744 455489 system_pods.go:61] "storage-provisioner" [2988a552-46dd-4cb6-b979-dfe034907971] Running
I1016 18:23:30.254760 455489 system_pods.go:74] duration metric: took 5.506719ms to wait for pod list to return data ...
I1016 18:23:30.254769 455489 default_sa.go:34] waiting for default service account to be created ...
I1016 18:23:30.257744 455489 default_sa.go:45] found service account: "default"
I1016 18:23:30.257766 455489 default_sa.go:55] duration metric: took 2.990811ms for default service account to be created ...
I1016 18:23:30.257774 455489 system_pods.go:116] waiting for k8s-apps to be running ...
I1016 18:23:30.260726 455489 system_pods.go:86] 9 kube-system pods found
I1016 18:23:30.260756 455489 system_pods.go:89] "coredns-66bc5c9577-z4hg5" [efda5c82-ee23-48ef-8c2e-28074ed85153] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1016 18:23:30.260767 455489 system_pods.go:89] "etcd-no-preload-810937" [faa1b8f1-5d4e-4500-8e14-a0d0ee2a0e5a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1016 18:23:30.260774 455489 system_pods.go:89] "kindnet-5sbf9" [45ed06bc-31e5-4fde-9dbf-a6eaa8b6d938] Running
I1016 18:23:30.260786 455489 system_pods.go:89] "kube-apiserver-no-preload-810937" [6f230cf7-f527-4b0a-a63e-2fe24135c87e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1016 18:23:30.260796 455489 system_pods.go:89] "kube-controller-manager-no-preload-810937" [12dceaec-2dd8-413c-a187-4a70f85ff74b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1016 18:23:30.260807 455489 system_pods.go:89] "kube-proxy-6qs44" [e2e37dee-fd6b-455f-ad48-dbbdee8879da] Running
I1016 18:23:30.260817 455489 system_pods.go:89] "kube-scheduler-no-preload-810937" [85b07845-4cb2-441b-a597-d8b4ff9b8bae] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1016 18:23:30.260827 455489 system_pods.go:89] "metrics-server-746fcd58dc-qwt2r" [43f8ae7b-748a-4d49-b73d-be4a0e7ce7bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1016 18:23:30.260840 455489 system_pods.go:89] "storage-provisioner" [2988a552-46dd-4cb6-b979-dfe034907971] Running
I1016 18:23:30.260854 455489 system_pods.go:126] duration metric: took 3.073373ms to wait for k8s-apps to be running ...
I1016 18:23:30.260868 455489 system_svc.go:44] waiting for kubelet service to be running ....
I1016 18:23:30.260920 455489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1016 18:23:30.275311 455489 system_svc.go:56] duration metric: took 14.433469ms WaitForService to wait for kubelet
I1016 18:23:30.275344 455489 kubeadm.go:586] duration metric: took 3.448703698s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1016 18:23:30.275369 455489 node_conditions.go:102] verifying NodePressure condition ...
I1016 18:23:30.278734 455489 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1016 18:23:30.278768 455489 node_conditions.go:123] node cpu capacity is 8
I1016 18:23:30.278783 455489 node_conditions.go:105] duration metric: took 3.407329ms to run NodePressure ...
I1016 18:23:30.278798 455489 start.go:241] waiting for startup goroutines ...
I1016 18:23:30.278808 455489 start.go:246] waiting for cluster config update ...
I1016 18:23:30.278822 455489 start.go:255] writing updated cluster config ...
I1016 18:23:30.279140 455489 ssh_runner.go:195] Run: rm -f paused
I1016 18:23:30.283165 455489 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1016 18:23:30.286853 455489 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z4hg5" in "kube-system" namespace to be "Ready" or be gone ...
W1016 18:23:32.291880 455489 pod_ready.go:104] pod "coredns-66bc5c9577-z4hg5" is not "Ready", error: <nil>
W1016 18:23:34.292174 455489 pod_ready.go:104] pod "coredns-66bc5c9577-z4hg5" is not "Ready", error: <nil>
W1016 18:23:31.988994 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
W1016 18:23:34.484924 447145 pod_ready.go:104] pod "coredns-5dd5756b68-v7w44" is not "Ready", error: <nil>
==> container status <==
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID          POD                                                 NAMESPACE
bae3d0ad615cf   c80c8dbafe7dd   23 seconds ago   Running   kube-controller-manager   1         5eef5cfd49f83   kube-controller-manager-kubernetes-upgrade-578123   kube-system
16b1c8f11304e   c3994bc696102   28 seconds ago   Exited    kube-apiserver            7         879cd4aeb65d0   kube-apiserver-kubernetes-upgrade-578123            kube-system
2d23a8b5c235b   c80c8dbafe7dd   4 minutes ago    Exited    kube-controller-manager   0         5eef5cfd49f83   kube-controller-manager-kubernetes-upgrade-578123   kube-system
b8f0d1ecab4ab   5f1f5298c888d   5 minutes ago    Running   etcd                      0         fae02c3e9bdd7   etcd-kubernetes-upgrade-578123                      kube-system
97444e7e2ccef   7dd6aaa1717ab   5 minutes ago    Running   kube-scheduler            0         60ae4f5965275   kube-scheduler-kubernetes-upgrade-578123            kube-system
==> containerd <==
Oct 16 18:20:17 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:20:17.597713955Z" level=info msg="received exit event container_id:\"8fcf7b3c918390cd11112d34f86590f9bf576d92173bdbef72af13d55477314a\" id:\"8fcf7b3c918390cd11112d34f86590f9bf576d92173bdbef72af13d55477314a\" pid:3404 exit_status:1 exited_at:{seconds:1760638817 nanos:597322909}"
Oct 16 18:20:17 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:20:17.936332830Z" level=info msg="shim disconnected" id=8fcf7b3c918390cd11112d34f86590f9bf576d92173bdbef72af13d55477314a namespace=k8s.io
Oct 16 18:20:17 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:20:17.936587500Z" level=warning msg="cleaning up after shim disconnected" id=8fcf7b3c918390cd11112d34f86590f9bf576d92173bdbef72af13d55477314a namespace=k8s.io
Oct 16 18:20:17 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:20:17.936610794Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 16 18:20:18 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:20:18.857458862Z" level=info msg="RemoveContainer for \"5e60af894f14ac845c1b7a4ab66f10b69fcb87a1381cfb818e4dacdc63b42c63\""
Oct 16 18:20:18 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:20:18.861273483Z" level=info msg="RemoveContainer for \"5e60af894f14ac845c1b7a4ab66f10b69fcb87a1381cfb818e4dacdc63b42c63\" returns successfully"
Oct 16 18:22:37 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:22:37.268079928Z" level=info msg="received exit event container_id:\"2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046\" id:\"2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046\" pid:3356 exit_status:1 exited_at:{seconds:1760638957 nanos:267720135}"
Oct 16 18:22:37 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:22:37.294383092Z" level=info msg="shim disconnected" id=2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046 namespace=k8s.io
Oct 16 18:22:37 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:22:37.294466154Z" level=warning msg="cleaning up after shim disconnected" id=2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046 namespace=k8s.io
Oct 16 18:22:37 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:22:37.294484473Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 16 18:23:05 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:05.310693759Z" level=error msg="ContainerStatus for \"6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6149a54630b98df191c6cd32981eae38a2661470fa364c4610cf557cd42f7d75\": not found"
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.375739606Z" level=info msg="CreateContainer within sandbox \"879cd4aeb65d0c7a5fe03548370da41d11f088a8162c240224fd75b949a9ea9f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:7,}"
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.386366932Z" level=info msg="CreateContainer within sandbox \"879cd4aeb65d0c7a5fe03548370da41d11f088a8162c240224fd75b949a9ea9f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:7,} returns container id \"16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d\""
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.387198178Z" level=info msg="StartContainer for \"16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d\""
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.485683395Z" level=info msg="StartContainer for \"16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d\" returns successfully"
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.546531825Z" level=info msg="received exit event container_id:\"16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d\" id:\"16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d\" pid:3674 exit_status:1 exited_at:{seconds:1760638988 nanos:546186324}"
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.571744869Z" level=info msg="shim disconnected" id=16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d namespace=k8s.io
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.571797605Z" level=warning msg="cleaning up after shim disconnected" id=16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d namespace=k8s.io
Oct 16 18:23:08 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:08.571810625Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 16 18:23:09 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:09.210029690Z" level=info msg="RemoveContainer for \"8fcf7b3c918390cd11112d34f86590f9bf576d92173bdbef72af13d55477314a\""
Oct 16 18:23:09 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:09.213751928Z" level=info msg="RemoveContainer for \"8fcf7b3c918390cd11112d34f86590f9bf576d92173bdbef72af13d55477314a\" returns successfully"
Oct 16 18:23:13 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:13.146016474Z" level=info msg="CreateContainer within sandbox \"5eef5cfd49f831b9ddb3fb6e6bae2a5f2a0d9c37644995fcbd4fda166f31de12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 16 18:23:13 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:13.415850551Z" level=info msg="CreateContainer within sandbox \"5eef5cfd49f831b9ddb3fb6e6bae2a5f2a0d9c37644995fcbd4fda166f31de12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bae3d0ad615cfbefc694d162898a58aa14bc8566fa4adaafe496000ad66b725b\""
Oct 16 18:23:13 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:13.416621021Z" level=info msg="StartContainer for \"bae3d0ad615cfbefc694d162898a58aa14bc8566fa4adaafe496000ad66b725b\""
Oct 16 18:23:13 kubernetes-upgrade-578123 containerd[2068]: time="2025-10-16T18:23:13.536686047Z" level=info msg="StartContainer for \"bae3d0ad615cfbefc694d162898a58aa14bc8566fa4adaafe496000ad66b725b\" returns successfully"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
==> dmesg <==
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a e8 60 dc 20 75 08 06
[ +14.385483] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a e0 64 be a4 fa 08 06
[ +0.011750] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 18 3a 91 13 31 08 06
[ +3.880764] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 1b ca 8e c5 a5 08 06
[ +0.000435] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a e8 60 dc 20 75 08 06
[ +5.867746] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff 8e 11 51 90 f2 6e 08 06
[Oct16 18:21] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 37 9d 00 e7 c2 08 06
[ +0.000305] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 18 3a 91 13 31 08 06
[ +9.244772] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 0e 79 1d 0e 11 08 06
[ +0.000408] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff 8e 11 51 90 f2 6e 08 06
[ +19.336528] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 33 57 3c c8 67 08 06
[ +15.785379] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 f4 49 25 27 fc 08 06
[ +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 33 57 3c c8 67 08 06
==> etcd [b8f0d1ecab4abecf85aafc218ec23ecab7d4e25e0b32eea84aa44415f6e25038] <==
{"level":"info","ts":"2025-10-16T18:17:41.164888Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 4"}
{"level":"info","ts":"2025-10-16T18:17:41.165382Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2025-10-16T18:17:41.165413Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-10-16T18:17:41.165436Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 4"}
{"level":"info","ts":"2025-10-16T18:17:41.165456Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2025-10-16T18:17:41.166283Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
{"level":"info","ts":"2025-10-16T18:17:41.166603Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-578123 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
{"level":"info","ts":"2025-10-16T18:17:41.166741Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-10-16T18:17:41.166848Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.5","to":"3.6"}
{"level":"info","ts":"2025-10-16T18:17:41.166772Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-10-16T18:17:41.167016Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
{"level":"info","ts":"2025-10-16T18:17:41.166788Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-10-16T18:17:41.167193Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-10-16T18:17:41.167074Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
{"level":"info","ts":"2025-10-16T18:17:41.167116Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
{"level":"info","ts":"2025-10-16T18:17:41.167280Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
{"level":"info","ts":"2025-10-16T18:17:41.167956Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"warn","ts":"2025-10-16T18:17:41.168159Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
{"level":"info","ts":"2025-10-16T18:17:41.168288Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-10-16T18:17:41.171886Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
{"level":"info","ts":"2025-10-16T18:17:41.171959Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-10-16T18:17:42.905445Z","caller":"traceutil/trace.go:172","msg":"trace[272684485] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"121.599238ms","start":"2025-10-16T18:17:42.783824Z","end":"2025-10-16T18:17:42.905423Z","steps":["trace[272684485] 'process raft request' (duration: 121.359335ms)"],"step_count":1}
{"level":"info","ts":"2025-10-16T18:17:53.221345Z","caller":"traceutil/trace.go:172","msg":"trace[768023137] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"171.086304ms","start":"2025-10-16T18:17:53.050246Z","end":"2025-10-16T18:17:53.221332Z","steps":["trace[768023137] 'process raft request' (duration: 170.952489ms)"],"step_count":1}
{"level":"warn","ts":"2025-10-16T18:17:53.499777Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.698274ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-10-16T18:17:53.499853Z","caller":"traceutil/trace.go:172","msg":"trace[1587675604] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:385; }","duration":"113.795479ms","start":"2025-10-16T18:17:53.386043Z","end":"2025-10-16T18:17:53.499838Z","steps":["trace[1587675604] 'range keys from in-memory index tree' (duration: 113.635465ms)"],"step_count":1}
==> kernel <==
18:24:37 up 1:06, 0 user, load average: 3.25, 3.47, 7.60
Linux kubernetes-upgrade-578123 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kube-apiserver [16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d] <==
I1016 18:23:08.539257 1 options.go:263] external host was not specified, using 192.168.85.2
I1016 18:23:08.541520 1 server.go:150] Version: v1.34.1
I1016 18:23:08.541554 1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E1016 18:23:08.541862 1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use"
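
Note: this is the proximate failure. Every restart of the apiserver container exits immediately because something already holds 0.0.0.0:8443 (plausibly a stale apiserver process left over from the earlier start), so kubelet backs the pod off into CrashLoopBackOff (see the kubelet section below) and the cluster never reports Running again. Assuming ss is available in the node image, the owner of the port can be identified with:

    # list listening TCP sockets with owning processes, filtered to 8443
    minikube ssh -p kubernetes-upgrade-578123 -- sudo ss -ltnp | grep 8443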
==> kube-controller-manager [2d23a8b5c235baa7a772d10b3be8f9b7d7f383805dceed0739c512370b873046] <==
I1016 18:19:35.475560 1 serving.go:386] Generated self-signed cert in-memory
I1016 18:19:36.247222 1 controllermanager.go:191] "Starting" version="v1.34.1"
I1016 18:19:36.247248 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1016 18:19:36.249645 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1016 18:19:36.249751 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1016 18:19:36.249827 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1016 18:19:36.249990 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1016 18:22:37.263524 1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: the server was unable to return a response in the time allotted, but may still be processing the request"
==> kube-controller-manager [bae3d0ad615cfbefc694d162898a58aa14bc8566fa4adaafe496000ad66b725b] <==
I1016 18:23:13.905664 1 serving.go:386] Generated self-signed cert in-memory
I1016 18:23:14.262478 1 controllermanager.go:191] "Starting" version="v1.34.1"
I1016 18:23:14.262508 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1016 18:23:14.264686 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1016 18:23:14.264906 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1016 18:23:14.265037 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1016 18:23:14.265135 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
==> kube-scheduler [97444e7e2cceff8652c2a875cf9a9bd355842660ee69fa42f94c3dff20935639] <==
I1016 18:17:41.269410 1 serving.go:386] Generated self-signed cert in-memory
W1016 18:18:41.493928 1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
W1016 18:18:41.493956 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1016 18:18:41.493965 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1016 18:18:41.505749 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1016 18:18:41.505773 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1016 18:18:41.507617 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1016 18:18:41.507652 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1016 18:18:41.507992 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1016 18:18:41.508052 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1016 18:18:41.608182 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E1016 18:19:15.611595 1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.186f0b96cb484ce4 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-10-16 18:18:41.60864287 +0000 UTC m=+60.876519969,Series:nil,ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-578123,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner f73ce66c-26aa-4659-8489-340862be508e v1 378 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
E1016 18:19:15.612322 1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
E1016 18:24:15.617259 1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
E1016 18:24:15.617259 1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.186f0b96cb484ce4 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-10-16 18:18:41.60864287 +0000 UTC m=+60.876519969,Series:&EventSeries{Count:2,LastObservedTime:2025-10-16 18:23:41.61490269 +0000 UTC m=+360.882779782,},ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-578123,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner f73ce66c-26aa-4659-8489-340862be508e v1 378 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
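
Note: these FailedScheduling events are downstream symptoms: the node keeps the node.kubernetes.io/not-ready taint because CNI never initializes (see the kubelet section below), so storage-provisioner has nowhere to schedule. Once the apiserver responds again, the taint can be confirmed with:

    kubectl --context kubernetes-upgrade-578123 describe node kubernetes-upgrade-578123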
==> kubelet <==
Oct 16 18:23:49 kubernetes-upgrade-578123 kubelet[1162]: I1016 18:23:49.371520 1162 kubelet.go:3202] "Trying to delete pod" pod="kube-system/etcd-kubernetes-upgrade-578123" podUID="f50cd0df-0c43-4d16-ab8d-a060fc3c66c5"
Oct 16 18:23:50 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:23:50.481688 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:23:54 kubernetes-upgrade-578123 kubelet[1162]: I1016 18:23:54.371504 1162 scope.go:117] "RemoveContainer" containerID="16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d"
Oct 16 18:23:54 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:23:54.371653 1162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-578123_kube-system(425ef690e835fe78102c4163b6eec258)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-578123" podUID="425ef690e835fe78102c4163b6eec258"
Oct 16 18:23:55 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:23:55.482892 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:00 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:00.484449 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:03 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:03.433322 1162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-578123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Oct 16 18:24:05 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:05.486004 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:08 kubernetes-upgrade-578123 kubelet[1162]: I1016 18:24:08.372263 1162 scope.go:117] "RemoveContainer" containerID="16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d"
Oct 16 18:24:08 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:08.372414 1162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-578123_kube-system(425ef690e835fe78102c4163b6eec258)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-578123" podUID="425ef690e835fe78102c4163b6eec258"
Oct 16 18:24:10 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:10.487674 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:15 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:15.488663 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:19 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:19.247573 1162 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kubernetes-upgrade-578123.186f0b7bba5b9c7f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-578123,UID:kubernetes-upgrade-578123,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-578123 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-578123,},FirstTimestamp:2025-10-16 18:16:45.360602239 +0000 UTC m=+0.083815530,LastTimestamp:2025-10-16 18:16:45.471816426 +0000 UTC m=+0.195029706,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-578123,}"
Oct 16 18:24:19 kubernetes-upgrade-578123 kubelet[1162]: I1016 18:24:19.371679 1162 scope.go:117] "RemoveContainer" containerID="16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d"
Oct 16 18:24:19 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:19.371873 1162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-578123_kube-system(425ef690e835fe78102c4163b6eec258)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-578123" podUID="425ef690e835fe78102c4163b6eec258"
Oct 16 18:24:20 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:20.434792 1162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-578123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Oct 16 18:24:20 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:20.489649 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:22 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:22.297089 1162 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-controller-manager-kubernetes-upgrade-578123"
Oct 16 18:24:23 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:23.374042 1162 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/etcd-kubernetes-upgrade-578123"
Oct 16 18:24:25 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:25.490511 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:29 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:29.391188 1162 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-kubernetes-upgrade-578123)" podUID="425ef690e835fe78102c4163b6eec258" pod="kube-system/kube-apiserver-kubernetes-upgrade-578123"
Oct 16 18:24:30 kubernetes-upgrade-578123 kubelet[1162]: I1016 18:24:30.372114 1162 scope.go:117] "RemoveContainer" containerID="16b1c8f11304e7fb20f48a5779bf1764f2e9ad8ce6adb280b7d4d13980f4183d"
Oct 16 18:24:30 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:30.372296 1162 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-578123_kube-system(425ef690e835fe78102c4163b6eec258)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-578123" podUID="425ef690e835fe78102c4163b6eec258"
Oct 16 18:24:30 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:30.491713 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 16 18:24:35 kubernetes-upgrade-578123 kubelet[1162]: E1016 18:24:35.493438 1162 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
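
Note: the "cni plugin not initialized" message repeating every ~5s means containerd never received a CNI config, which keeps the node NotReady and the taint in place. Assuming the standard containerd CNI config directory, an empty listing here would confirm that no config was ever written:

    minikube ssh -p kubernetes-upgrade-578123 -- sudo ls -l /etc/cni/net.d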
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-578123 -n kubernetes-upgrade-578123
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-578123 -n kubernetes-upgrade-578123: exit status 2 (15.944128388s)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-578123" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-578123" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p kubernetes-upgrade-578123
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-578123: (2.707796614s)
--- FAIL: TestKubernetesUpgrade (530.56s)
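
Note: the failure chain visible above is: kube-apiserver cannot bind 0.0.0.0:8443 -> CrashLoopBackOff -> CNI never initializes -> node stays NotReady -> status reports Stopped, and the test fails after 530.56s. The cleanup step deletes the profile, which discards the node state; when triaging a failure like this locally, it is worth capturing the full log bundle first:

    # write the complete minikube log bundle to a file before deleting the profile
    out/minikube-linux-amd64 logs -p kubernetes-upgrade-578123 --file=kubernetes-upgrade-578123.log
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-578123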