=== RUN TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (36.466145868s)
version_upgrade_test.go:227: (dbg) Run: out/minikube-linux-arm64 stop -p kubernetes-upgrade-522575
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-522575: (1.327491946s)
version_upgrade_test.go:232: (dbg) Run: out/minikube-linux-arm64 -p kubernetes-upgrade-522575 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-522575 status --format={{.Host}}: exit status 7 (74.401903ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
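An aside on that exit code: `minikube status` encodes host state in its exit status, so a non-zero code immediately after a deliberate `minikube stop` is expected, and the test only treats unrecognized codes as failures. A minimal Go sketch of that tolerance, assuming exit status 7 maps to a stopped host as the "(may be ok)" note suggests (the helper name and the code mapping are ours, not taken from the test source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// statusAfterStop runs `minikube status` for a profile that was just
// stopped. Exit status 7 is tolerated as "host stopped", matching the
// "(may be ok)" note in the log; that mapping is an assumption here.
func statusAfterStop(profile string) error {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile, "status", "--format={{.Host}}")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		fmt.Printf("status %s: exit 7 after stop (may be ok)\n", out)
		return nil // tolerated after an explicit `minikube stop`
	}
	return err
}

func main() {
	if err := statusAfterStop("kubernetes-upgrade-522575"); err != nil {
		fmt.Println("unexpected status error:", err)
	}
}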
version_upgrade_test.go:243: (dbg) Run: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
E0908 11:11:42.867863 2179425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/addons-151437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (33.991187551s)
version_upgrade_test.go:248: (dbg) Run: kubectl --context kubernetes-upgrade-522575 version --output=json
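This kubectl step is how the test confirms the upgrade actually took: `kubectl version --output=json` reports the server's gitVersion, which should now be v1.34.0. A rough Go equivalent, assuming the usual shape of kubectl's JSON output (the helper itself is ours, not the test's):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionOutput captures serverVersion.gitVersion from
// `kubectl version --output=json`; field names follow kubectl's JSON.
type versionOutput struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func serverGitVersion(context string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "version", "--output=json").Output()
	if err != nil {
		return "", err
	}
	var v versionOutput
	if err := json.Unmarshal(out, &v); err != nil {
		return "", err
	}
	return v.ServerVersion.GitVersion, nil
}

func main() {
	got, err := serverGitVersion("kubernetes-upgrade-522575")
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	if got != "v1.34.0" {
		fmt.Printf("server reports %s, want v1.34.0\n", got)
	}
}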
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd: exit status 106 (192.542441ms)
-- stdout --
* [kubernetes-upgrade-522575] minikube v1.36.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=21512
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21512-2177568/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-2177568/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr **
X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
* Suggestion:
1) Recreate the cluster with Kubernetes 1.28.0, by running:
minikube delete -p kubernetes-upgrade-522575
minikube start -p kubernetes-upgrade-522575 --kubernetes-version=v1.28.0
2) Create a second cluster with Kubernetes 1.28.0, by running:
minikube start -p kubernetes-upgrade-5225752 --kubernetes-version=v1.28.0
3) Use the existing cluster at version Kubernetes 1.34.0, by running:
minikube start -p kubernetes-upgrade-522575 --kubernetes-version=v1.34.0
** /stderr **
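Exit status 106 is minikube's reason code for K8S_DOWNGRADE_UNSUPPORTED: the requested --kubernetes-version is compared against the version recorded in the existing profile, and anything older is refused with the recreate/second-cluster suggestions above. A sketch of that guard using golang.org/x/mod/semver (the function name and exit-code wiring are assumptions; the real check lives in minikube's start validation):

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

// checkDowngrade refuses to start an existing cluster at an older
// Kubernetes version, mirroring the K8S_DOWNGRADE_UNSUPPORTED error above.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.34.0", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
		os.Exit(106) // minikube's reason exit code, per the log above
	}
}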
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 80 (3m8.677791013s)
-- stdout --
* [kubernetes-upgrade-522575] minikube v1.36.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=21512
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21512-2177568/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-2177568/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on existing profile
* Starting "kubernetes-upgrade-522575" primary control-plane node in "kubernetes-upgrade-522575" cluster
* Pulling base image v0.0.47-1756980985-21488 ...
* Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
** stderr **
I0908 11:12:12.426840 2350325 out.go:360] Setting OutFile to fd 1 ...
I0908 11:12:12.427062 2350325 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:12:12.427091 2350325 out.go:374] Setting ErrFile to fd 2...
I0908 11:12:12.427108 2350325 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:12:12.427997 2350325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-2177568/.minikube/bin
I0908 11:12:12.430085 2350325 out.go:368] Setting JSON to false
I0908 11:12:12.433392 2350325 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":60884,"bootTime":1757269048,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0908 11:12:12.433536 2350325 start.go:140] virtualization:
I0908 11:12:12.437257 2350325 out.go:179] * [kubernetes-upgrade-522575] minikube v1.36.0 on Ubuntu 20.04 (arm64)
I0908 11:12:12.440450 2350325 out.go:179] - MINIKUBE_LOCATION=21512
I0908 11:12:12.440517 2350325 notify.go:220] Checking for updates...
I0908 11:12:12.446305 2350325 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0908 11:12:12.449242 2350325 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21512-2177568/kubeconfig
I0908 11:12:12.452181 2350325 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-2177568/.minikube
I0908 11:12:12.455309 2350325 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0908 11:12:12.458171 2350325 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0908 11:12:12.462203 2350325 config.go:182] Loaded profile config "kubernetes-upgrade-522575": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 11:12:12.462980 2350325 driver.go:421] Setting default libvirt URI to qemu:///system
I0908 11:12:12.498545 2350325 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
I0908 11:12:12.498720 2350325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 11:12:12.645129 2350325 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:true NGoroutines:68 SystemTime:2025-09-08 11:12:12.619388231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 11:12:12.645250 2350325 docker.go:318] overlay module found
I0908 11:12:12.648340 2350325 out.go:179] * Using the docker driver based on existing profile
I0908 11:12:12.651553 2350325 start.go:304] selected driver: docker
I0908 11:12:12.651574 2350325 start.go:918] validating driver "docker" against &{Name:kubernetes-upgrade-522575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-522575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 11:12:12.651668 2350325 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0908 11:12:12.652342 2350325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 11:12:12.808478 2350325 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:64 SystemTime:2025-09-08 11:12:12.795498998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 11:12:12.808912 2350325 cni.go:84] Creating CNI manager for ""
I0908 11:12:12.808974 2350325 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0908 11:12:12.809037 2350325 start.go:348] cluster config:
{Name:kubernetes-upgrade-522575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-522575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 11:12:12.812613 2350325 out.go:179] * Starting "kubernetes-upgrade-522575" primary control-plane node in "kubernetes-upgrade-522575" cluster
I0908 11:12:12.815762 2350325 cache.go:123] Beginning downloading kic base image for docker with containerd
I0908 11:12:12.819215 2350325 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
I0908 11:12:12.823000 2350325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
I0908 11:12:12.822969 2350325 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 11:12:12.823285 2350325 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
I0908 11:12:12.823293 2350325 cache.go:58] Caching tarball of preloaded images
I0908 11:12:12.823381 2350325 preload.go:172] Found /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0908 11:12:12.823389 2350325 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
I0908 11:12:12.823506 2350325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/config.json ...
I0908 11:12:12.872294 2350325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
I0908 11:12:12.872319 2350325 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
I0908 11:12:12.872332 2350325 cache.go:232] Successfully downloaded all kic artifacts
I0908 11:12:12.872355 2350325 start.go:360] acquireMachinesLock for kubernetes-upgrade-522575: {Name:mka975f17004f70d38f22c54afb907ccc9309842 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:12:12.872422 2350325 start.go:364] duration metric: took 36.504µs to acquireMachinesLock for "kubernetes-upgrade-522575"
I0908 11:12:12.872443 2350325 start.go:96] Skipping create...Using existing machine configuration
I0908 11:12:12.872448 2350325 fix.go:54] fixHost starting:
I0908 11:12:12.872705 2350325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-522575 --format={{.State.Status}}
I0908 11:12:12.936873 2350325 fix.go:112] recreateIfNeeded on kubernetes-upgrade-522575: state=Running err=<nil>
W0908 11:12:12.936907 2350325 fix.go:138] unexpected machine state, will restart: <nil>
I0908 11:12:12.941228 2350325 out.go:252] * Updating the running docker "kubernetes-upgrade-522575" container ...
I0908 11:12:12.941277 2350325 machine.go:93] provisionDockerMachine start ...
I0908 11:12:12.941362 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:12.991203 2350325 main.go:141] libmachine: Using SSH client type: native
I0908 11:12:12.991529 2350325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil> [] 0s} 127.0.0.1 35933 <nil> <nil>}
I0908 11:12:12.991545 2350325 main.go:141] libmachine: About to run SSH command:
hostname
I0908 11:12:13.185214 2350325 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-522575
I0908 11:12:13.185237 2350325 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-522575"
I0908 11:12:13.185314 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:13.226259 2350325 main.go:141] libmachine: Using SSH client type: native
I0908 11:12:13.226590 2350325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil> [] 0s} 127.0.0.1 35933 <nil> <nil>}
I0908 11:12:13.226607 2350325 main.go:141] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-522575 && echo "kubernetes-upgrade-522575" | sudo tee /etc/hostname
I0908 11:12:13.483519 2350325 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-522575
I0908 11:12:13.483640 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:13.529323 2350325 main.go:141] libmachine: Using SSH client type: native
I0908 11:12:13.529620 2350325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil> [] 0s} 127.0.0.1 35933 <nil> <nil>}
I0908 11:12:13.529642 2350325 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-522575' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-522575/g' /etc/hosts;
else
echo '127.0.1.1 kubernetes-upgrade-522575' | sudo tee -a /etc/hosts;
fi
fi
I0908 11:12:13.752019 2350325 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0908 11:12:13.752044 2350325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21512-2177568/.minikube CaCertPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21512-2177568/.minikube}
I0908 11:12:13.752064 2350325 ubuntu.go:190] setting up certificates
I0908 11:12:13.752074 2350325 provision.go:84] configureAuth start
I0908 11:12:13.752133 2350325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-522575
I0908 11:12:13.786740 2350325 provision.go:143] copyHostCerts
I0908 11:12:13.786797 2350325 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-2177568/.minikube/key.pem, removing ...
I0908 11:12:13.786816 2350325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-2177568/.minikube/key.pem
I0908 11:12:13.786898 2350325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21512-2177568/.minikube/key.pem (1675 bytes)
I0908 11:12:13.786987 2350325 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-2177568/.minikube/ca.pem, removing ...
I0908 11:12:13.786992 2350325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-2177568/.minikube/ca.pem
I0908 11:12:13.787018 2350325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21512-2177568/.minikube/ca.pem (1082 bytes)
I0908 11:12:13.787072 2350325 exec_runner.go:144] found /home/jenkins/minikube-integration/21512-2177568/.minikube/cert.pem, removing ...
I0908 11:12:13.787076 2350325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21512-2177568/.minikube/cert.pem
I0908 11:12:13.787099 2350325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21512-2177568/.minikube/cert.pem (1123 bytes)
I0908 11:12:13.787141 2350325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-522575 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-522575 localhost minikube]
I0908 11:12:14.045999 2350325 provision.go:177] copyRemoteCerts
I0908 11:12:14.046229 2350325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0908 11:12:14.046301 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:14.085483 2350325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35933 SSHKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/kubernetes-upgrade-522575/id_rsa Username:docker}
I0908 11:12:14.192103 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0908 11:12:14.228210 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0908 11:12:14.264066 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0908 11:12:14.311927 2350325 provision.go:87] duration metric: took 559.831978ms to configureAuth
I0908 11:12:14.311961 2350325 ubuntu.go:206] setting minikube options for container-runtime
I0908 11:12:14.312139 2350325 config.go:182] Loaded profile config "kubernetes-upgrade-522575": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 11:12:14.312147 2350325 machine.go:96] duration metric: took 1.370863398s to provisionDockerMachine
I0908 11:12:14.312154 2350325 start.go:293] postStartSetup for "kubernetes-upgrade-522575" (driver="docker")
I0908 11:12:14.312263 2350325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0908 11:12:14.312355 2350325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0908 11:12:14.312449 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:14.342708 2350325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35933 SSHKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/kubernetes-upgrade-522575/id_rsa Username:docker}
I0908 11:12:14.437024 2350325 ssh_runner.go:195] Run: cat /etc/os-release
I0908 11:12:14.440987 2350325 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0908 11:12:14.441017 2350325 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0908 11:12:14.441028 2350325 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0908 11:12:14.441035 2350325 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0908 11:12:14.441045 2350325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-2177568/.minikube/addons for local assets ...
I0908 11:12:14.441100 2350325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21512-2177568/.minikube/files for local assets ...
I0908 11:12:14.441205 2350325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21512-2177568/.minikube/files/etc/ssl/certs/21794252.pem -> 21794252.pem in /etc/ssl/certs
I0908 11:12:14.441312 2350325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0908 11:12:14.451174 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/files/etc/ssl/certs/21794252.pem --> /etc/ssl/certs/21794252.pem (1708 bytes)
I0908 11:12:14.497403 2350325 start.go:296] duration metric: took 185.135112ms for postStartSetup
I0908 11:12:14.497491 2350325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0908 11:12:14.497539 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:14.526878 2350325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35933 SSHKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/kubernetes-upgrade-522575/id_rsa Username:docker}
I0908 11:12:14.633171 2350325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0908 11:12:14.639523 2350325 fix.go:56] duration metric: took 1.767060244s for fixHost
I0908 11:12:14.639547 2350325 start.go:83] releasing machines lock for "kubernetes-upgrade-522575", held for 1.76711607s
I0908 11:12:14.639613 2350325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-522575
I0908 11:12:14.671907 2350325 ssh_runner.go:195] Run: cat /version.json
I0908 11:12:14.671964 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:14.674463 2350325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0908 11:12:14.674540 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:14.698681 2350325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35933 SSHKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/kubernetes-upgrade-522575/id_rsa Username:docker}
I0908 11:12:14.719866 2350325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35933 SSHKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/kubernetes-upgrade-522575/id_rsa Username:docker}
I0908 11:12:14.953290 2350325 ssh_runner.go:195] Run: systemctl --version
I0908 11:12:14.957538 2350325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0908 11:12:14.962874 2350325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0908 11:12:14.991769 2350325 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0908 11:12:14.991858 2350325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0908 11:12:15.003695 2350325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0908 11:12:15.003728 2350325 start.go:495] detecting cgroup driver to use...
I0908 11:12:15.003797 2350325 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0908 11:12:15.003875 2350325 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0908 11:12:15.021866 2350325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0908 11:12:15.039812 2350325 docker.go:218] disabling cri-docker service (if available) ...
I0908 11:12:15.039880 2350325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0908 11:12:15.054567 2350325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0908 11:12:15.073324 2350325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0908 11:12:15.212642 2350325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0908 11:12:15.367514 2350325 docker.go:234] disabling docker service ...
I0908 11:12:15.367585 2350325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0908 11:12:15.383679 2350325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0908 11:12:15.396312 2350325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0908 11:12:15.552968 2350325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0908 11:12:15.670512 2350325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0908 11:12:15.685970 2350325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0908 11:12:15.705895 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I0908 11:12:15.717799 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0908 11:12:15.728625 2350325 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0908 11:12:15.728705 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0908 11:12:15.740590 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0908 11:12:15.752086 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0908 11:12:15.763465 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0908 11:12:15.776235 2350325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0908 11:12:15.790115 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0908 11:12:15.802464 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0908 11:12:15.812589 2350325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0908 11:12:15.823803 2350325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0908 11:12:15.837037 2350325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0908 11:12:15.849130 2350325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0908 11:12:15.977946 2350325 ssh_runner.go:195] Run: sudo systemctl restart containerd
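The sed pipeline above rewrites /etc/containerd/config.toml in place (pause image, SystemdCgroup = false for the detected cgroupfs driver, CNI conf_dir, unprivileged ports) before the daemon-reload and restart that follow. The SystemdCgroup edit, done in-process rather than over ssh, would look roughly like this (a sketch; minikube itself shells out to sed as logged):

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs mirrors the logged sed expression
// 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' so containerd's
// runc runtime matches the "cgroupfs" driver detected on the host.
func forceCgroupfs(configTOML []byte) []byte {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAll(configTOML, []byte("${1}SystemdCgroup = false"))
}

func main() {
	in := []byte("            SystemdCgroup = true\n")
	fmt.Printf("%s", forceCgroupfs(in))
}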
I0908 11:12:16.277153 2350325 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0908 11:12:16.277232 2350325 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0908 11:12:16.291100 2350325 start.go:563] Will wait 60s for crictl version
I0908 11:12:16.291223 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:16.295047 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0908 11:12:16.361574 2350325 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0908 11:12:16.361694 2350325 ssh_runner.go:195] Run: containerd --version
I0908 11:12:16.400804 2350325 ssh_runner.go:195] Run: containerd --version
I0908 11:12:16.452475 2350325 out.go:179] * Preparing Kubernetes v1.34.0 on containerd 1.7.27 ...
I0908 11:12:16.455401 2350325 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-522575 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 11:12:16.475313 2350325 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0908 11:12:16.479619 2350325 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-522575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-522575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0908 11:12:16.479745 2350325 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 11:12:16.479805 2350325 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 11:12:16.534358 2350325 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-proxy:v1.34.0". assuming images are not preloaded.
I0908 11:12:16.534426 2350325 ssh_runner.go:195] Run: which lz4
I0908 11:12:16.538619 2350325 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0908 11:12:16.543625 2350325 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
I0908 11:12:16.543644 2350325 containerd.go:563] duration metric: took 5.08538ms to copy over tarball
I0908 11:12:16.543707 2350325 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0908 11:12:22.637429 2350325 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.093699027s)
I0908 11:12:22.637505 2350325 kubeadm.go:901] preload failed, will try to load cached images: extracting tarball:
** stderr **
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
tar: Exiting with failure status due to previous errors
** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
stdout:
stderr:
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
tar: Exiting with failure status due to previous errors
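The tar failure above is the preload being unpacked over a /var that already holds containerd snapshot data from the first start; GNU tar refuses to replace the existing zoneinfo paths and exits with status 2, so kubeadm.go:901 falls back to loading per-image tarballs from the cache, which is what the following lines do. The control flow, roughly, assuming a trivial local command runner (the helper names are ours):

package main

import (
	"fmt"
	"os/exec"
)

// run is a stand-in for minikube's ssh_runner; it executes locally here.
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

// loadImages tries the lz4 preload first and, if extraction over a live
// /var fails (e.g. "Cannot open: File exists"), falls back to importing
// cached per-image tarballs, mirroring "preload failed, will try to load
// cached images" above.
func loadImages(preload string, cached []string) error {
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", preload); err == nil {
		return nil
	}
	for _, tarball := range cached {
		if err := run("sudo", "ctr", "-n=k8s.io", "images", "import", tarball); err != nil {
			return fmt.Errorf("loading %s: %w", tarball, err)
		}
	}
	return nil
}

func main() {
	err := loadImages("/preloaded.tar.lz4",
		[]string{"/var/lib/minikube/images/storage-provisioner_v5"})
	fmt.Println("loadImages:", err)
}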
I0908 11:12:22.637587 2350325 ssh_runner.go:195] Run: sudo crictl images --output json
I0908 11:12:22.694775 2350325 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-proxy:v1.34.0". assuming images are not preloaded.
I0908 11:12:22.694798 2350325 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.0 registry.k8s.io/kube-controller-manager:v1.34.0 registry.k8s.io/kube-scheduler:v1.34.0 registry.k8s.io/kube-proxy:v1.34.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I0908 11:12:22.694845 2350325 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:22.695066 2350325 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.0
I0908 11:12:22.695166 2350325 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.0
I0908 11:12:22.695250 2350325 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.0
I0908 11:12:22.695337 2350325 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.0
I0908 11:12:22.695411 2350325 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I0908 11:12:22.695488 2350325 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
I0908 11:12:22.695578 2350325 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
I0908 11:12:22.698587 2350325 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I0908 11:12:22.698951 2350325 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.0
I0908 11:12:22.699085 2350325 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.0
I0908 11:12:22.699200 2350325 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.0
I0908 11:12:22.699310 2350325 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:22.699514 2350325 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.0
I0908 11:12:22.699634 2350325 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
I0908 11:12:22.699857 2350325 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
I0908 11:12:23.016889 2350325 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.0" and sha "6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf"
I0908 11:12:23.017112 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.0
I0908 11:12:23.038802 2350325 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc"
I0908 11:12:23.038876 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
I0908 11:12:23.059421 2350325 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
I0908 11:12:23.059494 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I0908 11:12:23.059960 2350325 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.0" and sha "996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570"
I0908 11:12:23.060007 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.0
I0908 11:12:23.071466 2350325 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.0" and sha "d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be"
I0908 11:12:23.071549 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.0
I0908 11:12:23.074012 2350325 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e"
I0908 11:12:23.074070 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
I0908 11:12:23.078756 2350325 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.0" and sha "a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee"
I0908 11:12:23.078817 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.0
I0908 11:12:23.290713 2350325 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.0" needs transfer: "registry.k8s.io/kube-proxy:v1.34.0" does not exist at hash "6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf" in container runtime
I0908 11:12:23.290760 2350325 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.0
I0908 11:12:23.290808 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:23.290904 2350325 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
I0908 11:12:23.290929 2350325 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
I0908 11:12:23.290952 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:23.291033 2350325 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
I0908 11:12:23.291051 2350325 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
I0908 11:12:23.291072 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:23.291143 2350325 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.0" does not exist at hash "996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570" in container runtime
I0908 11:12:23.291161 2350325 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.0
I0908 11:12:23.291184 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:23.291262 2350325 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.0" does not exist at hash "d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be" in container runtime
I0908 11:12:23.291288 2350325 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.0
I0908 11:12:23.291309 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:23.291410 2350325 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
I0908 11:12:23.291428 2350325 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
I0908 11:12:23.291450 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:23.307405 2350325 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.0" does not exist at hash "a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee" in container runtime
I0908 11:12:23.307449 2350325 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.0
I0908 11:12:23.307499 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:23.311784 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
I0908 11:12:23.311859 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I0908 11:12:23.311905 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
I0908 11:12:23.311946 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.0
I0908 11:12:23.311991 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I0908 11:12:23.314784 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.0
I0908 11:12:23.328916 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.0
I0908 11:12:23.733307 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I0908 11:12:23.733468 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
I0908 11:12:23.733572 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I0908 11:12:23.733637 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
I0908 11:12:23.733700 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0
I0908 11:12:23.733764 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0
I0908 11:12:23.733802 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0
I0908 11:12:23.872007 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I0908 11:12:23.872171 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I0908 11:12:23.872289 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.0
I0908 11:12:23.997912 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0
I0908 11:12:23.998055 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
I0908 11:12:23.998119 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
W0908 11:12:24.022673 2350325 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I0908 11:12:24.022924 2350325 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
I0908 11:12:24.023029 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:24.072321 2350325 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I0908 11:12:24.072416 2350325 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:24.072499 2350325 ssh_runner.go:195] Run: which crictl
I0908 11:12:24.077099 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:24.128854 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:24.175899 2350325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:24.225159 2350325 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I0908 11:12:24.225408 2350325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0908 11:12:24.229806 2350325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I0908 11:12:24.229884 2350325 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0908 11:12:24.229960 2350325 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0908 11:12:24.475750 2350325 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0908 11:12:24.475858 2350325 cache_images.go:93] duration metric: took 1.781045582s to LoadCachedImages
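The block above is minikube's cached-image reconciliation: for each required image it checks containerd's k8s.io namespace for a matching digest, removes stale or arch-mismatched copies with crictl, and re-imports the tarball staged under /var/lib/minikube/images. A minimal sketch of the same cycle run by hand inside the node (image name and paths taken from the log; the test's own binary is the assumed entry point):
$ out/minikube-linux-arm64 ssh -p kubernetes-upgrade-522575
# is the image already present in containerd's k8s.io namespace?
$ sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
# drop the stale copy, then import the cached tarball
$ sudo crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
$ sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5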
W0908 11:12:24.475974 2350325 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1: no such file or directory
X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1: no such file or directory
I0908 11:12:24.476016 2350325 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 containerd true true} ...
I0908 11:12:24.476251 2350325 kubeadm.go:938] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-522575 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-522575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
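The [Unit]/[Service] stanza above is the systemd drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp appears a few lines below); the empty ExecStart= line first clears the base unit's command so the versioned kubelet binary can be substituted. To inspect the effective unit after the drop-in lands, something like:
# show the base unit merged with all drop-ins, then pick up the change
$ sudo systemctl cat kubelet
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet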
I0908 11:12:24.476345 2350325 ssh_runner.go:195] Run: sudo crictl info
I0908 11:12:24.527580 2350325 cni.go:84] Creating CNI manager for ""
I0908 11:12:24.527654 2350325 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0908 11:12:24.527678 2350325 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0908 11:12:24.527730 2350325 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-522575 NodeName:kubernetes-upgrade-522575 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0908 11:12:24.527888 2350325 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "kubernetes-upgrade-522575"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.76.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
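The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged as /var/tmp/minikube/kubeadm.yaml.new and later diffed against the live copy. One way to sanity-check such a staged file by hand, assuming a kubeadm release new enough to carry the validate subcommand:
$ sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new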
I0908 11:12:24.527989 2350325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
I0908 11:12:24.540315 2350325 binaries.go:44] Found k8s binaries, skipping transfer
I0908 11:12:24.540429 2350325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0908 11:12:24.549869 2350325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I0908 11:12:24.571105 2350325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0908 11:12:24.591925 2350325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
I0908 11:12:24.613040 2350325 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
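The grep confirms that control-plane.minikube.internal already resolves to the node IP inside the guest; when the entry is missing, minikube appends one. A hand-rolled equivalent (IP and hostname from the log):
$ grep "control-plane.minikube.internal" /etc/hosts \
    || echo "192.168.76.2 control-plane.minikube.internal" | sudo tee -a /etc/hosts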
I0908 11:12:24.617549 2350325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0908 11:12:24.754503 2350325 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0908 11:12:24.769738 2350325 certs.go:68] Setting up /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575 for IP: 192.168.76.2
I0908 11:12:24.769831 2350325 certs.go:194] generating shared ca certs ...
I0908 11:12:24.769863 2350325 certs.go:226] acquiring lock for ca certs: {Name:mk1bcace57bdfe3c9a30a15436c57c3e3c666591 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 11:12:24.770066 2350325 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/ca.key
I0908 11:12:24.770157 2350325 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/proxy-client-ca.key
I0908 11:12:24.770185 2350325 certs.go:256] generating profile certs ...
I0908 11:12:24.770312 2350325 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/client.key
I0908 11:12:24.770484 2350325 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/apiserver.key.0483f3e7
I0908 11:12:24.770582 2350325 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/proxy-client.key
I0908 11:12:24.770741 2350325 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/2179425.pem (1338 bytes)
W0908 11:12:24.770808 2350325 certs.go:480] ignoring /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/2179425_empty.pem, impossibly tiny 0 bytes
I0908 11:12:24.770836 2350325 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca-key.pem (1679 bytes)
I0908 11:12:24.770894 2350325 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/ca.pem (1082 bytes)
I0908 11:12:24.770941 2350325 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/cert.pem (1123 bytes)
I0908 11:12:24.770993 2350325 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/key.pem (1675 bytes)
I0908 11:12:24.771075 2350325 certs.go:484] found cert: /home/jenkins/minikube-integration/21512-2177568/.minikube/files/etc/ssl/certs/21794252.pem (1708 bytes)
I0908 11:12:24.771991 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0908 11:12:24.797398 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0908 11:12:24.820884 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0908 11:12:24.844749 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0908 11:12:24.870108 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I0908 11:12:24.893306 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0908 11:12:24.917542 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0908 11:12:24.945272 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0908 11:12:24.981631 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/files/etc/ssl/certs/21794252.pem --> /usr/share/ca-certificates/21794252.pem (1708 bytes)
I0908 11:12:25.007617 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0908 11:12:25.040118 2350325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21512-2177568/.minikube/certs/2179425.pem --> /usr/share/ca-certificates/2179425.pem (1338 bytes)
I0908 11:12:25.066392 2350325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0908 11:12:25.087426 2350325 ssh_runner.go:195] Run: openssl version
I0908 11:12:25.094123 2350325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0908 11:12:25.106519 2350325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0908 11:12:25.110221 2350325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 8 10:34 /usr/share/ca-certificates/minikubeCA.pem
I0908 11:12:25.110330 2350325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0908 11:12:25.117515 2350325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0908 11:12:25.127364 2350325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2179425.pem && ln -fs /usr/share/ca-certificates/2179425.pem /etc/ssl/certs/2179425.pem"
I0908 11:12:25.137781 2350325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2179425.pem
I0908 11:12:25.141426 2350325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 8 10:41 /usr/share/ca-certificates/2179425.pem
I0908 11:12:25.141536 2350325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2179425.pem
I0908 11:12:25.148873 2350325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2179425.pem /etc/ssl/certs/51391683.0"
I0908 11:12:25.159305 2350325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21794252.pem && ln -fs /usr/share/ca-certificates/21794252.pem /etc/ssl/certs/21794252.pem"
I0908 11:12:25.169883 2350325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21794252.pem
I0908 11:12:25.173877 2350325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 8 10:41 /usr/share/ca-certificates/21794252.pem
I0908 11:12:25.173963 2350325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21794252.pem
I0908 11:12:25.181239 2350325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21794252.pem /etc/ssl/certs/3ec20f2e.0"
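The test/ln pairs above implement OpenSSL's c_rehash convention: a CA in /etc/ssl/certs is looked up by the hash of its subject, so each PEM needs a <subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem here). Reproducing one link manually:
# compute the subject hash and create the lookup symlink OpenSSL expects
$ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"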
I0908 11:12:25.190728 2350325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0908 11:12:25.194360 2350325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0908 11:12:25.201151 2350325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0908 11:12:25.208288 2350325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0908 11:12:25.215444 2350325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0908 11:12:25.222782 2350325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0908 11:12:25.229913 2350325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
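Each of the six openssl runs above uses -checkend 86400, which exits non-zero if the certificate will expire within 86400 seconds (24 hours); a failure here is what would trigger certificate regeneration. For example:
# exit status 0 = still valid in 24h; non-zero = expiring or expired
$ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for at least 24h" || echo "expiring soon"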
I0908 11:12:25.237002 2350325 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-522575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-522575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 11:12:25.237086 2350325 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0908 11:12:25.237165 2350325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0908 11:12:25.284243 2350325 cri.go:89] found id: "3564d752930c09cb758835a6a6c488cfc72a6779f86283a5620159fa40eae9d1"
I0908 11:12:25.284263 2350325 cri.go:89] found id: ""
I0908 11:12:25.284342 2350325 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0908 11:12:25.306415 2350325 cri.go:116] JSON = [{"ociVersion":"1.2.0","id":"013a491f30ae84a7c89bd4dd53bbe8a5f1ccbccabf4c73a6a0bfc0c6bb24436f","pid":1267,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/013a491f30ae84a7c89bd4dd53bbe8a5f1ccbccabf4c73a6a0bfc0c6bb24436f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/013a491f30ae84a7c89bd4dd53bbe8a5f1ccbccabf4c73a6a0bfc0c6bb24436f/rootfs","created":"2025-09-08T11:11:56.033543758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"013a491f30ae84a7c89bd4dd53bbe8a5f1ccbccabf4c73a6a0bfc0c6bb24436f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-522575_9d5e16346e954f3f07640c8beb47c2bb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-522575","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9d5e16346e954f3f07640c8beb47c2bb"},"owner":"root"},{"ociVersion":"1.2.0","id":"04ee454ca2428d938ac7ca4b064a6f8a5b001330a3eb580ac0030a83dd557a2f","pid":1532,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/04ee454ca2428d938ac7ca4b064a6f8a5b001330a3eb580ac0030a83dd557a2f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/04ee454ca2428d938ac7ca4b064a6f8a5b001330a3eb580ac0030a83dd557a2f/rootfs","created":"2025-09-08T11:12:04.343930203Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri.sandbox-id":"72520f6ab0bbb01a34d1e5d4ee44947403262f46d9a21fc701d0f74a03936b7d","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-522575","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"63e04d7d76b2f53f80bb3e9b18a6cb69"},"owner":"root"},{"ociVersion":"1.2.0","id":"1abd645d14518fef8410afada6b225d67ab28fdf1ac989480eef5c62dbcab61f","pid":1430,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1abd645d14518fef8410afada6b225d67ab28fdf1ac989480eef5c62dbcab61f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1abd645d14518fef8410afada6b225d67ab28fdf1ac989480eef5c62dbcab61f/rootfs","created":"2025-09-08T11:11:58.360477888Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri.sandbox-id":"3917f480f85eeb0a19e79782467f0a7910ada641e24be804d6a9785ef71a2f68","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-522575","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2c2c99a4ae60ea78c90f18f48a9c46e0"},"owner":"root"},{"ociVersion":"1.2.0","id":"3917f480f85eeb0a19e79782467f0a7910ada641e24be804d6a9785ef71a2f68","pid":1240,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3917f480f85eeb0a19e79782467f0a7910ada641e24be804d6a9785ef71a2f68","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3917f480f85eeb0a19e79782467f0a7910ada641e24be804d6a9785ef71a2f68/rootfs","created":"2025-09-08T11:11:55.959994043Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"3917f480f85eeb0a19e79782467f0a7910ada641e24be804d6a9785ef71a2f68","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-522575_2c2c99a4ae60ea78c90f18f48a9c46e0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-kubernetes-upgrade-522575","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2c2c99a4ae60ea78c90f18f48a9c46e0"},"owner":"root"},{"ociVersion":"1.2.0","id":"72520f6ab0bbb01a34d1e5d4ee44947403262f46d9a21fc701d0f74a03936b7d","pid":1290,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72520f6ab0bbb01a34d1e5d4ee44947403262f46d9a21fc701d0f74a03936b7d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/72520f6ab0bbb01a34d1e5d4ee44947403262f46d9a21fc701d0f74a03936b7d/rootfs","created":"2025-09-08T11:11:56.144634517Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"72520f6ab0bbb01a34d1e5d4ee44947403262f46d9a21fc701d0f74a03936b7d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-522575_63e04d7d76b2f53f80bb3e9b18a6cb69","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-522575","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"63e04d7d76b2f53f80bb3e9b18a6cb69"},"owner":"root"},{"ociVersion":"1.2.0","id":"7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459","pid":2034,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459/rootfs","created":"2025-09-08T11:12:16.954538445Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-522575_63e04d7d76b2f53f80bb3e9b18a6cb69","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-kubernetes-upgrade-522575","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"63e04d7d76b2f53f80bb3e9b18a6cb69"},"owner":"root"},{"ociVersion":"1.2.0","id":"ad776badb2c0cc67d16c294b4b4838918de74cfa33b89b1918cb2f6bab7fa9b0","pid":1484,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad776badb2c0cc67d16c294b4b4838918de74cfa33b89b1918cb2f6bab7fa9b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad776badb2c0cc67d16c294b4b4838918de74cfa33b89b1918cb2f6bab7fa9b0/rootfs","created":"2025-09-08T11:12:02.430324015Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri.sandbox-id":"013a491f30ae84a7c89bd4dd53bbe8a5f1ccbccabf4c73a6a0bfc0c6bb24436f","io.kubernetes.cri.sandbox-name":"kube-apiserver-kubernetes-upgrade-522575","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9d5e16346e954f3f07640c8beb47c2bb"},"owner":"root"}]
I0908 11:12:25.306612 2350325 cri.go:126] list returned 7 containers
I0908 11:12:25.306627 2350325 cri.go:129] container: {ID:013a491f30ae84a7c89bd4dd53bbe8a5f1ccbccabf4c73a6a0bfc0c6bb24436f Status:running}
I0908 11:12:25.306649 2350325 cri.go:131] skipping 013a491f30ae84a7c89bd4dd53bbe8a5f1ccbccabf4c73a6a0bfc0c6bb24436f - not in ps
I0908 11:12:25.306657 2350325 cri.go:129] container: {ID:04ee454ca2428d938ac7ca4b064a6f8a5b001330a3eb580ac0030a83dd557a2f Status:running}
I0908 11:12:25.306663 2350325 cri.go:131] skipping 04ee454ca2428d938ac7ca4b064a6f8a5b001330a3eb580ac0030a83dd557a2f - not in ps
I0908 11:12:25.306667 2350325 cri.go:129] container: {ID:1abd645d14518fef8410afada6b225d67ab28fdf1ac989480eef5c62dbcab61f Status:running}
I0908 11:12:25.306673 2350325 cri.go:131] skipping 1abd645d14518fef8410afada6b225d67ab28fdf1ac989480eef5c62dbcab61f - not in ps
I0908 11:12:25.306677 2350325 cri.go:129] container: {ID:3917f480f85eeb0a19e79782467f0a7910ada641e24be804d6a9785ef71a2f68 Status:running}
I0908 11:12:25.306685 2350325 cri.go:131] skipping 3917f480f85eeb0a19e79782467f0a7910ada641e24be804d6a9785ef71a2f68 - not in ps
I0908 11:12:25.306689 2350325 cri.go:129] container: {ID:72520f6ab0bbb01a34d1e5d4ee44947403262f46d9a21fc701d0f74a03936b7d Status:running}
I0908 11:12:25.306696 2350325 cri.go:131] skipping 72520f6ab0bbb01a34d1e5d4ee44947403262f46d9a21fc701d0f74a03936b7d - not in ps
I0908 11:12:25.306701 2350325 cri.go:129] container: {ID:7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459 Status:running}
I0908 11:12:25.306707 2350325 cri.go:131] skipping 7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459 - not in ps
I0908 11:12:25.306711 2350325 cri.go:129] container: {ID:ad776badb2c0cc67d16c294b4b4838918de74cfa33b89b1918cb2f6bab7fa9b0 Status:running}
I0908 11:12:25.306718 2350325 cri.go:131] skipping ad776badb2c0cc67d16c294b4b4838918de74cfa33b89b1918cb2f6bab7fa9b0 - not in ps
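minikube is cross-referencing two views of the runtime here: crictl (CRI containers filtered by the kube-system namespace label) and runc (every OCI task under containerd's k8s.io root, including pause/sandbox tasks that crictl does not report as containers). IDs that appear only in the runc view are skipped. The same two listings by hand:
# CRI view: kube-system containers only
$ sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# OCI view: all runc tasks in containerd's k8s.io namespace
$ sudo runc --root /run/containerd/runc/k8s.io list -f json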
I0908 11:12:25.306773 2350325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0908 11:12:25.316866 2350325 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0908 11:12:25.316936 2350325 kubeadm.go:589] restartPrimaryControlPlane start ...
I0908 11:12:25.317014 2350325 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0908 11:12:25.326458 2350325 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0908 11:12:25.327082 2350325 kubeconfig.go:125] found "kubernetes-upgrade-522575" server: "https://192.168.76.2:8443"
I0908 11:12:25.327789 2350325 kapi.go:59] client config for kubernetes-upgrade-522575: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/client.crt", KeyFile:"/home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/client.key", CAFile:"/home/jenkins/minikube-integration/21512-2177568/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2d7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 11:12:25.328633 2350325 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0908 11:12:25.328744 2350325 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0908 11:12:25.328782 2350325 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0908 11:12:25.328816 2350325 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0908 11:12:25.328838 2350325 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
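The envvar.go lines report client-go feature-gate defaults for this process. Judging by the logger name these gates can be flipped per process through KUBE_FEATURE_<Name> environment variables (an assumption about client-go's env-var gate reader, not something the log itself shows), e.g.:
# hypothetically opt one kubectl invocation into the WatchListClient gate
$ KUBE_FEATURE_WatchListClient=true kubectl --context kubernetes-upgrade-522575 get pods -A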
I0908 11:12:25.329169 2350325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0908 11:12:25.341003 2350325 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
I0908 11:12:25.341087 2350325 kubeadm.go:593] duration metric: took 24.13105ms to restartPrimaryControlPlane
I0908 11:12:25.341110 2350325 kubeadm.go:394] duration metric: took 104.116761ms to StartCluster
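restartPrimaryControlPlane finishes in roughly 24 ms because the decision reduces to a plain diff: the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on disk, so no kubeadm phases are re-run. The check is reproducible verbatim:
# exit status 0 (empty diff) means the running control plane needs no reconfiguration
$ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no reconfiguration needed"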
I0908 11:12:25.341157 2350325 settings.go:142] acquiring lock: {Name:mk77e04f4141ed9ed7c4254bc37ebf622d3e3ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 11:12:25.341263 2350325 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21512-2177568/kubeconfig
I0908 11:12:25.341976 2350325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-2177568/kubeconfig: {Name:mk8d533c5c3c43ec96bc546513e47b0970d73b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 11:12:25.342255 2350325 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0908 11:12:25.342735 2350325 config.go:182] Loaded profile config "kubernetes-upgrade-522575": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 11:12:25.342723 2350325 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0908 11:12:25.342805 2350325 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-522575"
I0908 11:12:25.342822 2350325 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-522575"
W0908 11:12:25.342833 2350325 addons.go:247] addon storage-provisioner should already be in state true
I0908 11:12:25.342862 2350325 host.go:66] Checking if "kubernetes-upgrade-522575" exists ...
I0908 11:12:25.343032 2350325 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-522575"
I0908 11:12:25.343122 2350325 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-522575"
I0908 11:12:25.343305 2350325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-522575 --format={{.State.Status}}
I0908 11:12:25.343629 2350325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-522575 --format={{.State.Status}}
I0908 11:12:25.347785 2350325 out.go:179] * Verifying Kubernetes components...
I0908 11:12:25.350889 2350325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0908 11:12:25.383587 2350325 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0908 11:12:25.385394 2350325 kapi.go:59] client config for kubernetes-upgrade-522575: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/client.crt", KeyFile:"/home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/kubernetes-upgrade-522575/client.key", CAFile:"/home/jenkins/minikube-integration/21512-2177568/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2d7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 11:12:25.385681 2350325 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-522575"
W0908 11:12:25.385691 2350325 addons.go:247] addon default-storageclass should already be in state true
I0908 11:12:25.385717 2350325 host.go:66] Checking if "kubernetes-upgrade-522575" exists ...
I0908 11:12:25.386153 2350325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-522575 --format={{.State.Status}}
I0908 11:12:25.386669 2350325 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0908 11:12:25.386691 2350325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0908 11:12:25.386736 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:25.437742 2350325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35933 SSHKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/kubernetes-upgrade-522575/id_rsa Username:docker}
I0908 11:12:25.439725 2350325 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0908 11:12:25.439743 2350325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0908 11:12:25.439801 2350325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-522575
I0908 11:12:25.470854 2350325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35933 SSHKeyPath:/home/jenkins/minikube-integration/21512-2177568/.minikube/machines/kubernetes-upgrade-522575/id_rsa Username:docker}
I0908 11:12:25.558426 2350325 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0908 11:12:25.591696 2350325 api_server.go:52] waiting for apiserver process to appear ...
I0908 11:12:25.591812 2350325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 11:12:25.605746 2350325 api_server.go:72] duration metric: took 263.425703ms to wait for apiserver process to appear ...
I0908 11:12:25.605772 2350325 api_server.go:88] waiting for apiserver healthz status ...
I0908 11:12:25.605815 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0908 11:12:25.635609 2350325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0908 11:12:25.701601 2350325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
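Both addon manifests are applied with the node's own kubectl binary and the in-VM kubeconfig rather than the host's, so the applies succeed even while the host kubeconfig is being rewritten. The equivalent manual invocation (paths from the log):
$ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml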
I0908 11:12:27.616616 2350325 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0908 11:12:27.616688 2350325 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
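These 500s are the expected shape of a control-plane restart: every subsystem reports ok except [-]etcd, whose failure detail is withheld from unauthenticated callers, and minikube simply re-polls every ~2 s until /healthz turns 200. A sketch of the same poll with curl (-k only because no CA bundle is passed; on success the verbose endpoint ends with "healthz check passed"):
# poll until the apiserver reports healthy; ?verbose lists per-check status
$ until curl -sk "https://192.168.76.2:8443/healthz?verbose" | grep -q "healthz check passed"; do sleep 2; done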
I0908 11:12:27.616709 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0908 11:12:29.625874 2350325 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0908 11:12:29.625948 2350325 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0908 11:12:29.625976 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0908 11:12:31.634846 2350325 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0908 11:12:31.634878 2350325 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0908 11:12:31.634902 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0908 11:12:33.644766 2350325 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0908 11:12:33.644869 2350325 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0908 11:12:33.644931 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0908 11:12:35.656007 2350325 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0908 11:12:35.656037 2350325 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0908 11:12:35.656055 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
[... 10 further polls elided: each "Checking apiserver healthz at https://192.168.76.2:8443/healthz" was followed by the identical 500 response body ([-]etcd failed: reason withheld, every other check [+]ok), logged twice per poll (api_server.go:279 and api_server.go:103), at 11:12:37, 11:12:39, 11:12:41, 11:12:43, 11:12:45, 11:12:47, 11:12:49, 11:12:51.747, 11:12:51.755, and 11:12:52.115 ...]
I0908 11:12:52.606869 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0908 11:12:57.608121 2350325 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0908 11:12:57.608163 2350325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0908 11:12:58.434006 2350325 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0908 11:12:58.448666 2350325 api_server.go:141] control plane version: v1.34.0
I0908 11:12:58.448696 2350325 api_server.go:131] duration metric: took 32.84291717s to wait for apiserver health ...
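A note on the repeated "[-]etcd failed: reason withheld" lines above: the aggregate /healthz endpoint intentionally withholds failure details from the HTTP response; the underlying error goes to the apiserver's own log, and the per-check endpoints return it directly. Assuming this run's profile and context names, the usual probes look like:

    kubectl --context kubernetes-upgrade-522575 get --raw='/healthz?verbose'
    kubectl --context kubernetes-upgrade-522575 get --raw='/healthz/etcd'
    out/minikube-linux-arm64 -p kubernetes-upgrade-522575 logs | grep -i 'healthz check'

The first reproduces the verbose body logged above, the second returns the actual etcd error instead of "reason withheld", and the third searches the aggregated component logs for the server-side message.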
I0908 11:12:58.448705 2350325 system_pods.go:43] waiting for kube-system pods to appear ...
I0908 11:13:20.442619 2350325 system_pods.go:59] 5 kube-system pods found
I0908 11:13:20.442651 2350325 system_pods.go:61] "etcd-kubernetes-upgrade-522575" [097d5f8e-86af-49b9-ac78-995b975fa4b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0908 11:13:20.442658 2350325 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-522575" [1fc4c3ac-cf3d-4e67-b875-21310ad614b8] Pending
I0908 11:13:20.442666 2350325 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-522575" [d8f7f9b2-ccdd-4f59-81d7-addf0c2a10d2] Pending
I0908 11:13:20.442671 2350325 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-522575" [39d022fd-a189-43b6-a92c-3dc13cbd1298] Pending
I0908 11:13:20.442676 2350325 system_pods.go:61] "storage-provisioner" [899f0e68-8715-4577-b97e-e0c459c0e8aa] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
I0908 11:13:20.442683 2350325 system_pods.go:74] duration metric: took 21.993971569s to wait for pod list to return data ...
I0908 11:13:20.442695 2350325 kubeadm.go:578] duration metric: took 55.10038187s to wait for: map[apiserver:true system_pods:true]
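The Pending storage-provisioner above is the scheduler declining to place the pod while the sole node still carries the node.kubernetes.io/not-ready taint; the taint clears on its own once the kubelet reports Ready. A quick way to confirm it (illustrative, using this run's context):

    kubectl --context kubernetes-upgrade-522575 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

So a Pending storage-provisioner here is a symptom of control-plane health, not a scheduling bug.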
I0908 11:13:20.442709 2350325 node_conditions.go:102] verifying NodePressure condition ...
I0908 11:14:20.446445 2350325 retry.go:31] will retry after 537.427126ms: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
I0908 11:15:20.985104 2350325 node_conditions.go:105] duration metric: took 2m0.5423856s to run NodePressure ...
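The "unable to return a response in the time allotted" errors above are client-side request timeouts: /healthz eventually returned 200, but the list-nodes call behind the NodePressure check never completed within its deadline. Re-running the call with an explicit timeout separates the two failure modes (standard kubectl flags; the context name is this run's profile):

    kubectl --context kubernetes-upgrade-522575 get nodes -o wide --request-timeout=30s

If that also times out while /healthz reports ok, the apiserver is reachable but its etcd reads are stalling, consistent with the etcd health failures earlier in this log.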
I0908 11:15:20.988531 2350325 out.go:203]
W0908 11:15:20.992010 2350325 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: node pressure: list nodes retry: the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
W0908 11:15:20.992031 2350325 out.go:285] *
W0908 11:15:20.994245 2350325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0908 11:15:20.997024 2350325 out.go:203]
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-arm64 start -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-09-08 11:15:21.075055884 +0000 UTC m=+2501.859926417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect kubernetes-upgrade-522575
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-522575:
-- stdout --
[
{
"Id": "9e1ec926d0b3be6003cbc2603275903a00bef70ddf982ab9931a6879a64b6458",
"Created": "2025-09-08T11:11:06.625498217Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2347602,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-09-08T11:11:38.153652978Z",
"FinishedAt": "2025-09-08T11:11:37.153416467Z"
},
"Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
"ResolvConfPath": "/var/lib/docker/containers/9e1ec926d0b3be6003cbc2603275903a00bef70ddf982ab9931a6879a64b6458/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/9e1ec926d0b3be6003cbc2603275903a00bef70ddf982ab9931a6879a64b6458/hostname",
"HostsPath": "/var/lib/docker/containers/9e1ec926d0b3be6003cbc2603275903a00bef70ddf982ab9931a6879a64b6458/hosts",
"LogPath": "/var/lib/docker/containers/9e1ec926d0b3be6003cbc2603275903a00bef70ddf982ab9931a6879a64b6458/9e1ec926d0b3be6003cbc2603275903a00bef70ddf982ab9931a6879a64b6458-json.log",
"Name": "/kubernetes-upgrade-522575",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"kubernetes-upgrade-522575:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "kubernetes-upgrade-522575",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "9e1ec926d0b3be6003cbc2603275903a00bef70ddf982ab9931a6879a64b6458",
"LowerDir": "/var/lib/docker/overlay2/d3ba25c2c838cb3d97888900c6193e0c809cd68e322b5c042d6e908c33913389-init/diff:/var/lib/docker/overlay2/b1a66a65c97e17d5d253f20a10b277d455497c2156f2605acca5ea8c28a71db0/diff",
"MergedDir": "/var/lib/docker/overlay2/d3ba25c2c838cb3d97888900c6193e0c809cd68e322b5c042d6e908c33913389/merged",
"UpperDir": "/var/lib/docker/overlay2/d3ba25c2c838cb3d97888900c6193e0c809cd68e322b5c042d6e908c33913389/diff",
"WorkDir": "/var/lib/docker/overlay2/d3ba25c2c838cb3d97888900c6193e0c809cd68e322b5c042d6e908c33913389/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "kubernetes-upgrade-522575",
"Source": "/var/lib/docker/volumes/kubernetes-upgrade-522575/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "kubernetes-upgrade-522575",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "kubernetes-upgrade-522575",
"name.minikube.sigs.k8s.io": "kubernetes-upgrade-522575",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6e7139ed5ceff7930558a004be77e6214e1bd0f3a561eb8d3fbd32cc11d76eb2",
"SandboxKey": "/var/run/docker/netns/6e7139ed5cef",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35933"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35934"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35937"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35935"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35936"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"kubernetes-upgrade-522575": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ba:5a:54:96:25:23",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "e5948a652fcc9b972830cbdfa0d3c9f3bdea238a57e72b18ad4981d9431a41d0",
"EndpointID": "2f24971d77740896231af4acd4beb7fdbf85f7dde3e4f1d5c7e3c63287830e46",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"kubernetes-upgrade-522575",
"9e1ec926d0b3"
]
}
}
}
}
]
-- /stdout --
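Two fields in the inspect output are worth reading together: HostConfig.PortBindings asks for 127.0.0.1 with an empty HostPort, i.e. "pick any free host port", and NetworkSettings.Ports shows what Docker picked (8443/tcp -> 127.0.0.1:35936). Either of these reads the mapping back (the container name is this run's profile):

    docker port kubernetes-upgrade-522575 8443/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-522575

On this Linux host the health checks go straight to the container IP (192.168.76.2:8443); the forwarded 127.0.0.1 port matters on hosts where the container network is not routable.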
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-522575 -n kubernetes-upgrade-522575
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-522575 -n kubernetes-upgrade-522575: exit status 2 (14.280128963s)
-- stdout --
Running
-- /stdout --
** stderr **
E0908 11:15:35.374942 2364971 status.go:466] Error apiserver status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[-]log failed: reason withheld
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
** /stderr **
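Note that the failing check has moved since the startup loop: etcd now reports ok and the log check fails, which points at a flapping apiserver rather than etcd being down outright. Individual checks can be watched in isolation (illustrative loop against this run's context):

    for c in etcd log; do kubectl --context kubernetes-upgrade-522575 get --raw="/healthz/$c" || true; done

That flapping is why the harness treats exit status 2 as possibly benign on the next line.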
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p kubernetes-upgrade-522575 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-522575 logs -n 25: (1m1.544701039s)
helpers_test.go:260: TestKubernetesUpgrade logs:
-- stdout --
==> Audit <==
┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ stop │ -p NoKubernetes-902198 │ NoKubernetes-902198 │ jenkins │ v1.36.0 │ 08 Sep 25 11:10 UTC │ 08 Sep 25 11:10 UTC │
│ start │ -p NoKubernetes-902198 --driver=docker --container-runtime=containerd │ NoKubernetes-902198 │ jenkins │ v1.36.0 │ 08 Sep 25 11:10 UTC │ 08 Sep 25 11:10 UTC │
│ ssh │ -p NoKubernetes-902198 sudo systemctl is-active --quiet service kubelet │ NoKubernetes-902198 │ jenkins │ v1.36.0 │ 08 Sep 25 11:10 UTC │ │
│ delete │ -p NoKubernetes-902198 │ NoKubernetes-902198 │ jenkins │ v1.36.0 │ 08 Sep 25 11:10 UTC │ 08 Sep 25 11:10 UTC │
│ start │ -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ kubernetes-upgrade-522575 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ stop │ -p kubernetes-upgrade-522575 │ kubernetes-upgrade-522575 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:11 UTC │
│ start │ -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ kubernetes-upgrade-522575 │ jenkins │ v1.36.0 │ 08 Sep 25 11:11 UTC │ 08 Sep 25 11:12 UTC │
│ delete │ -p missing-upgrade-606936 │ missing-upgrade-606936 │ jenkins │ v1.36.0 │ 08 Sep 25 11:12 UTC │ 08 Sep 25 11:12 UTC │
│ start │ -p stopped-upgrade-032750 --memory=3072 --vm-driver=docker --container-runtime=containerd │ stopped-upgrade-032750 │ jenkins │ v1.32.0 │ 08 Sep 25 11:12 UTC │ 08 Sep 25 11:12 UTC │
│ start │ -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd │ kubernetes-upgrade-522575 │ jenkins │ v1.36.0 │ 08 Sep 25 11:12 UTC │ │
│ start │ -p kubernetes-upgrade-522575 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ kubernetes-upgrade-522575 │ jenkins │ v1.36.0 │ 08 Sep 25 11:12 UTC │ │
│ stop │ stopped-upgrade-032750 stop │ stopped-upgrade-032750 │ jenkins │ v1.32.0 │ 08 Sep 25 11:12 UTC │ 08 Sep 25 11:12 UTC │
│ start │ -p stopped-upgrade-032750 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ stopped-upgrade-032750 │ jenkins │ v1.36.0 │ 08 Sep 25 11:12 UTC │ 08 Sep 25 11:13 UTC │
│ delete │ -p stopped-upgrade-032750 │ stopped-upgrade-032750 │ jenkins │ v1.36.0 │ 08 Sep 25 11:13 UTC │ 08 Sep 25 11:13 UTC │
│ start │ -p running-upgrade-188219 --memory=3072 --vm-driver=docker --container-runtime=containerd │ running-upgrade-188219 │ jenkins │ v1.32.0 │ 08 Sep 25 11:13 UTC │ 08 Sep 25 11:13 UTC │
│ start │ -p running-upgrade-188219 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ running-upgrade-188219 │ jenkins │ v1.36.0 │ 08 Sep 25 11:13 UTC │ 08 Sep 25 11:14 UTC │
│ delete │ -p running-upgrade-188219 │ running-upgrade-188219 │ jenkins │ v1.36.0 │ 08 Sep 25 11:14 UTC │ 08 Sep 25 11:14 UTC │
│ start │ -p pause-349306 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=containerd │ pause-349306 │ jenkins │ v1.36.0 │ 08 Sep 25 11:14 UTC │ 08 Sep 25 11:15 UTC │
│ start │ -p pause-349306 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ pause-349306 │ jenkins │ v1.36.0 │ 08 Sep 25 11:15 UTC │ 08 Sep 25 11:15 UTC │
│ pause │ -p pause-349306 --alsologtostderr -v=5 │ pause-349306 │ jenkins │ v1.36.0 │ 08 Sep 25 11:15 UTC │ 08 Sep 25 11:15 UTC │
│ unpause │ -p pause-349306 --alsologtostderr -v=5 │ pause-349306 │ jenkins │ v1.36.0 │ 08 Sep 25 11:15 UTC │ 08 Sep 25 11:15 UTC │
│ pause │ -p pause-349306 --alsologtostderr -v=5 │ pause-349306 │ jenkins │ v1.36.0 │ 08 Sep 25 11:15 UTC │ 08 Sep 25 11:15 UTC │
│ delete │ -p pause-349306 --alsologtostderr -v=5 │ pause-349306 │ jenkins │ v1.36.0 │ 08 Sep 25 11:15 UTC │ 08 Sep 25 11:15 UTC │
│ delete │ -p pause-349306 │ pause-349306 │ jenkins │ v1.36.0 │ 08 Sep 25 11:15 UTC │ 08 Sep 25 11:15 UTC │
│ start │ -p force-systemd-flag-544610 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd │ force-systemd-flag-544610 │ jenkins │ v1.36.0 │ 08 Sep 25 11:15 UTC │ │
└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/09/08 11:15:33
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.24.6 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0908 11:15:33.280432 2365166 out.go:360] Setting OutFile to fd 1 ...
I0908 11:15:33.280551 2365166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:15:33.280563 2365166 out.go:374] Setting ErrFile to fd 2...
I0908 11:15:33.280568 2365166 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 11:15:33.280839 2365166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21512-2177568/.minikube/bin
I0908 11:15:33.281250 2365166 out.go:368] Setting JSON to false
I0908 11:15:33.282210 2365166 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":61085,"bootTime":1757269048,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0908 11:15:33.282279 2365166 start.go:140] virtualization:
I0908 11:15:33.285680 2365166 out.go:179] * [force-systemd-flag-544610] minikube v1.36.0 on Ubuntu 20.04 (arm64)
I0908 11:15:33.289615 2365166 out.go:179] - MINIKUBE_LOCATION=21512
I0908 11:15:33.289690 2365166 notify.go:220] Checking for updates...
I0908 11:15:33.296026 2365166 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0908 11:15:33.298869 2365166 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21512-2177568/kubeconfig
I0908 11:15:33.301808 2365166 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21512-2177568/.minikube
I0908 11:15:33.304900 2365166 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I0908 11:15:33.307833 2365166 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I0908 11:15:33.311079 2365166 config.go:182] Loaded profile config "kubernetes-upgrade-522575": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.0
I0908 11:15:33.311193 2365166 driver.go:421] Setting default libvirt URI to qemu:///system
I0908 11:15:33.342444 2365166 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
I0908 11:15:33.342564 2365166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 11:15:33.398775 2365166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 11:15:33.389349283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 11:15:33.398887 2365166 docker.go:318] overlay module found
I0908 11:15:33.402025 2365166 out.go:179] * Using the docker driver based on user configuration
I0908 11:15:33.404850 2365166 start.go:304] selected driver: docker
I0908 11:15:33.404868 2365166 start.go:918] validating driver "docker" against <nil>
I0908 11:15:33.404882 2365166 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0908 11:15:33.405619 2365166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 11:15:33.457131 2365166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 11:15:33.448029249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 11:15:33.457281 2365166 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I0908 11:15:33.457505 2365166 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
I0908 11:15:33.460447 2365166 out.go:179] * Using Docker driver with root privileges
I0908 11:15:33.463392 2365166 cni.go:84] Creating CNI manager for ""
I0908 11:15:33.463465 2365166 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0908 11:15:33.463478 2365166 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I0908 11:15:33.463554 2365166 start.go:348] cluster config:
{Name:force-systemd-flag-544610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:force-systemd-flag-544610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0908 11:15:33.466637 2365166 out.go:179] * Starting "force-systemd-flag-544610" primary control-plane node in "force-systemd-flag-544610" cluster
I0908 11:15:33.469423 2365166 cache.go:123] Beginning downloading kic base image for docker with containerd
I0908 11:15:33.472287 2365166 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
I0908 11:15:33.475020 2365166 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime containerd
I0908 11:15:33.475070 2365166 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4
I0908 11:15:33.475095 2365166 cache.go:58] Caching tarball of preloaded images
I0908 11:15:33.475130 2365166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
I0908 11:15:33.475178 2365166 preload.go:172] Found /home/jenkins/minikube-integration/21512-2177568/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0908 11:15:33.475187 2365166 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on containerd
I0908 11:15:33.475292 2365166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/force-systemd-flag-544610/config.json ...
I0908 11:15:33.475307 2365166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/force-systemd-flag-544610/config.json: {Name:mk237284fa5863b812083b624414697530a6a84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0908 11:15:33.494540 2365166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
I0908 11:15:33.494565 2365166 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
I0908 11:15:33.494583 2365166 cache.go:232] Successfully downloaded all kic artifacts
I0908 11:15:33.494613 2365166 start.go:360] acquireMachinesLock for force-systemd-flag-544610: {Name:mk4385bb117cbccf6a4f57f4ad083e2a9cb793e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0908 11:15:33.494716 2365166 start.go:364] duration metric: took 84.453µs to acquireMachinesLock for "force-systemd-flag-544610"
I0908 11:15:33.494756 2365166 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-544610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:force-systemd-flag-544610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0908 11:15:33.494818 2365166 start.go:125] createHost starting for "" (driver="docker")
==> container status <==
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID          POD
86bc177896d8d   a25f5ef9c34c3   6 seconds ago    Exited    kube-scheduler            5         7a1acdf3ecaa4   kube-scheduler-kubernetes-upgrade-522575
2fe2b99c9fd22   d291939e99406   16 seconds ago   Exited    kube-apiserver            6         e7589f5873dab   kube-apiserver-kubernetes-upgrade-522575
c4fdeec6e2a71   a1894772a478e   2 minutes ago    Running   etcd                      0         d0e9c36c6b98e   etcd-kubernetes-upgrade-522575
2976d17a55d2c   996be7e86d9b3   2 minutes ago    Running   kube-controller-manager   1         9b1e6ca29c9c3   kube-controller-manager-kubernetes-upgrade-522575
==> containerd <==
Sep 08 11:14:00 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:14:00.791939714Z" level=info msg="shim disconnected" id=a244981c39eb5a16e042689b87a3e3f626ab30bbcc0d3f059318aa12ceaac223 namespace=k8s.io
Sep 08 11:14:00 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:14:00.791988410Z" level=warning msg="cleaning up after shim disconnected" id=a244981c39eb5a16e042689b87a3e3f626ab30bbcc0d3f059318aa12ceaac223 namespace=k8s.io
Sep 08 11:14:00 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:14:00.792028820Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 08 11:14:01 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:14:01.220962584Z" level=info msg="RemoveContainer for \"75ea6bfe10973d211d459543c1bb97679d8594451e03ec168521d1ae6373502b\""
Sep 08 11:14:01 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:14:01.233274243Z" level=info msg="RemoveContainer for \"75ea6bfe10973d211d459543c1bb97679d8594451e03ec168521d1ae6373502b\" returns successfully"
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.662855448Z" level=info msg="CreateContainer within sandbox \"e7589f5873dab8602a56e62c001de6b11ecabea20f0618007e32a2814c26d350\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:6,}"
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.686643045Z" level=info msg="CreateContainer within sandbox \"e7589f5873dab8602a56e62c001de6b11ecabea20f0618007e32a2814c26d350\" for &ContainerMetadata{Name:kube-apiserver,Attempt:6,} returns container id \"2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236\""
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.687555759Z" level=info msg="StartContainer for \"2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236\""
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.763970731Z" level=info msg="StartContainer for \"2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236\" returns successfully"
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.811651932Z" level=info msg="received exit event container_id:\"2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236\" id:\"2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236\" pid:3289 exit_status:1 exited_at:{seconds:1757330119 nanos:811213593}"
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.866082664Z" level=info msg="shim disconnected" id=2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236 namespace=k8s.io
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.866279230Z" level=warning msg="cleaning up after shim disconnected" id=2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236 namespace=k8s.io
Sep 08 11:15:19 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:19.866332496Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 08 11:15:20 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:20.380307555Z" level=info msg="RemoveContainer for \"32403016dabc9b4a7ad20f2d1bf913daf09d43a74b9825e479d99be3ced3062f\""
Sep 08 11:15:20 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:20.386842397Z" level=info msg="RemoveContainer for \"32403016dabc9b4a7ad20f2d1bf913daf09d43a74b9825e479d99be3ced3062f\" returns successfully"
Sep 08 11:15:29 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:29.663091096Z" level=info msg="CreateContainer within sandbox \"7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:5,}"
Sep 08 11:15:29 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:29.682969315Z" level=info msg="CreateContainer within sandbox \"7a1acdf3ecaa45fe3fa037b7b1aa86ab10084f0cc5d489254436954ac688a459\" for &ContainerMetadata{Name:kube-scheduler,Attempt:5,} returns container id \"86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801\""
Sep 08 11:15:29 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:29.683647746Z" level=info msg="StartContainer for \"86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801\""
Sep 08 11:15:29 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:29.759815003Z" level=info msg="StartContainer for \"86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801\" returns successfully"
Sep 08 11:15:30 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:30.505277670Z" level=info msg="received exit event container_id:\"86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801\" id:\"86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801\" pid:3366 exit_status:1 exited_at:{seconds:1757330130 nanos:505039768}"
Sep 08 11:15:30 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:30.533629929Z" level=info msg="shim disconnected" id=86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801 namespace=k8s.io
Sep 08 11:15:30 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:30.533864171Z" level=warning msg="cleaning up after shim disconnected" id=86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801 namespace=k8s.io
Sep 08 11:15:30 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:30.533914270Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 08 11:15:31 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:31.408536712Z" level=info msg="RemoveContainer for \"a244981c39eb5a16e042689b87a3e3f626ab30bbcc0d3f059318aa12ceaac223\""
Sep 08 11:15:31 kubernetes-upgrade-522575 containerd[1916]: time="2025-09-08T11:15:31.419601444Z" level=info msg="RemoveContainer for \"a244981c39eb5a16e042689b87a3e3f626ab30bbcc0d3f059318aa12ceaac223\" returns successfully"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
==> dmesg <==
[Sep 8 10:33] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [c4fdeec6e2a716e5dc7a482dc1251f1184355935e79e5f7ed411dff1fa7e3e96] <==
{"level":"info","ts":"2025-09-08T11:12:58.391930Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 3"}
{"level":"info","ts":"2025-09-08T11:12:58.391980Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 3"}
{"level":"info","ts":"2025-09-08T11:12:58.392072Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
{"level":"info","ts":"2025-09-08T11:12:58.392132Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-09-08T11:12:58.392183Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 4"}
{"level":"info","ts":"2025-09-08T11:12:58.396083Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
{"level":"info","ts":"2025-09-08T11:12:58.396126Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-09-08T11:12:58.396149Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 4"}
{"level":"info","ts":"2025-09-08T11:12:58.396159Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
{"level":"info","ts":"2025-09-08T11:12:58.397352Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
{"level":"info","ts":"2025-09-08T11:12:58.398388Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-522575 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
{"level":"info","ts":"2025-09-08T11:12:58.398573Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-09-08T11:12:58.398724Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-09-08T11:12:58.398901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-09-08T11:12:58.398928Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-09-08T11:12:58.399168Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.5","to":"3.6"}
{"level":"info","ts":"2025-09-08T11:12:58.399330Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
{"level":"info","ts":"2025-09-08T11:12:58.399366Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
{"level":"info","ts":"2025-09-08T11:12:58.399534Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
{"level":"info","ts":"2025-09-08T11:12:58.399591Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
{"level":"warn","ts":"2025-09-08T11:12:58.400421Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
{"level":"info","ts":"2025-09-08T11:12:58.400658Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-09-08T11:12:58.400455Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-09-08T11:12:58.402898Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
{"level":"info","ts":"2025-09-08T11:12:58.404725Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
==> kernel <==
11:16:36 up 16:59, 0 users, load average: 2.09, 2.52, 2.46
Linux kubernetes-upgrade-522575 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236] <==
I0908 11:15:19.802328 1 options.go:263] external host was not specified, using 192.168.76.2
I0908 11:15:19.805023 1 server.go:150] Version: v1.34.0
I0908 11:15:19.805163 1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E0908 11:15:19.805556 1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use"
==> kube-controller-manager [2976d17a55d2c9a4a6e516f21cae9c207f3231ce96669aa4cf05903f4040f8cf] <==
I0908 11:13:54.461974 1 controllermanager.go:781] "Started controller" controller="ttl-after-finished-controller"
I0908 11:13:54.462003 1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
I0908 11:13:54.462175 1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
I0908 11:13:54.462265 1 shared_informer.go:349] "Waiting for caches to sync" controller="TTL after finished"
I0908 11:13:54.562655 1 controllermanager.go:781] "Started controller" controller="validatingadmissionpolicy-status-controller"
I0908 11:13:54.562686 1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
I0908 11:13:54.562732 1 shared_informer.go:349] "Waiting for caches to sync" controller="validatingadmissionpolicy-status"
I0908 11:13:54.611908 1 controllermanager.go:781] "Started controller" controller="statefulset-controller"
I0908 11:13:54.612251 1 stateful_set.go:169] "Starting stateful set controller" logger="statefulset-controller"
I0908 11:13:54.612278 1 shared_informer.go:349] "Waiting for caches to sync" controller="stateful set"
I0908 11:13:54.666071 1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
I0908 11:13:54.666098 1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
I0908 11:13:54.666149 1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
I0908 11:13:54.666162 1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-serving"
I0908 11:13:54.666189 1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
I0908 11:13:54.666290 1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
I0908 11:13:54.666964 1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
I0908 11:13:54.666984 1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
I0908 11:13:54.667017 1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
I0908 11:13:54.667745 1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
I0908 11:13:54.667803 1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
I0908 11:13:54.667811 1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
I0908 11:13:54.667835 1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
E0908 11:14:54.821264 1 cidr_allocator.go:125] "Failed to list all nodes" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)" logger="node-ipam-controller"
E0908 11:15:54.822673 1 cidr_allocator.go:125] "Failed to list all nodes" err="the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)" logger="node-ipam-controller"
==> kube-scheduler [86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801] <==
I0908 11:15:30.500055 1 serving.go:386] Generated self-signed cert in-memory
E0908 11:15:30.501365 1 run.go:72] "command failed" err="failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use"
==> kubelet <==
Sep 08 11:15:48 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:15:48.660195 1068 scope.go:117] "RemoveContainer" containerID="86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801"
Sep 08 11:15:48 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:15:48.660814 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-kubernetes-upgrade-522575_kube-system(63e04d7d76b2f53f80bb3e9b18a6cb69)\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-522575" podUID="63e04d7d76b2f53f80bb3e9b18a6cb69"
Sep 08 11:15:49 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:15:49.424757 1068 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-522575\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-522575?timeout=10s\": context deadline exceeded"
Sep 08 11:15:50 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:15:50.663517 1068 scope.go:117] "RemoveContainer" containerID="2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236"
Sep 08 11:15:50 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:15:50.663700 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-522575_kube-system(9d5e16346e954f3f07640c8beb47c2bb)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-522575" podUID="9d5e16346e954f3f07640c8beb47c2bb"
Sep 08 11:15:56 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:15:56.086315 1068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-522575?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Sep 08 11:15:59 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:15:59.425614 1068 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-522575\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-522575?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 08 11:15:59 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:15:59.660666 1068 scope.go:117] "RemoveContainer" containerID="86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801"
Sep 08 11:15:59 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:15:59.660841 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-kubernetes-upgrade-522575_kube-system(63e04d7d76b2f53f80bb3e9b18a6cb69)\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-522575" podUID="63e04d7d76b2f53f80bb3e9b18a6cb69"
Sep 08 11:15:59 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:15:59.662561 1068 kubelet.go:3221] "Failed creating a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/etcd-kubernetes-upgrade-522575"
Sep 08 11:16:05 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:16:05.660514 1068 scope.go:117] "RemoveContainer" containerID="2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236"
Sep 08 11:16:05 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:05.661125 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-522575_kube-system(9d5e16346e954f3f07640c8beb47c2bb)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-522575" podUID="9d5e16346e954f3f07640c8beb47c2bb"
Sep 08 11:16:09 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:09.426451 1068 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-522575\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-522575?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 08 11:16:11 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:16:11.660825 1068 scope.go:117] "RemoveContainer" containerID="86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801"
Sep 08 11:16:11 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:11.661477 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-kubernetes-upgrade-522575_kube-system(63e04d7d76b2f53f80bb3e9b18a6cb69)\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-522575" podUID="63e04d7d76b2f53f80bb3e9b18a6cb69"
Sep 08 11:16:13 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:13.086822 1068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-522575?timeout=10s\": context deadline exceeded" interval="7s"
Sep 08 11:16:19 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:19.427070 1068 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-522575\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-522575?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 08 11:16:19 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:19.427110 1068 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
Sep 08 11:16:19 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:16:19.660106 1068 scope.go:117] "RemoveContainer" containerID="2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236"
Sep 08 11:16:19 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:19.660409 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-522575_kube-system(9d5e16346e954f3f07640c8beb47c2bb)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-522575" podUID="9d5e16346e954f3f07640c8beb47c2bb"
Sep 08 11:16:22 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:16:22.660433 1068 scope.go:117] "RemoveContainer" containerID="86bc177896d8da677852c1858728475f3d75f6c48fead29736a9bdb905964801"
Sep 08 11:16:22 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:22.660616 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-kubernetes-upgrade-522575_kube-system(63e04d7d76b2f53f80bb3e9b18a6cb69)\"" pod="kube-system/kube-scheduler-kubernetes-upgrade-522575" podUID="63e04d7d76b2f53f80bb3e9b18a6cb69"
Sep 08 11:16:30 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:30.089335 1068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-522575?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Sep 08 11:16:30 kubernetes-upgrade-522575 kubelet[1068]: I0908 11:16:30.660690 1068 scope.go:117] "RemoveContainer" containerID="2fe2b99c9fd22e0de412561ff1d9de3915d9264f68977fc852913b27d669a236"
Sep 08 11:16:30 kubernetes-upgrade-522575 kubelet[1068]: E0908 11:16:30.661007 1068 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-522575_kube-system(9d5e16346e954f3f07640c8beb47c2bb)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-522575" podUID="9d5e16346e954f3f07640c8beb47c2bb"
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-522575 -n kubernetes-upgrade-522575
E0908 11:16:42.868031 2179425 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21512-2177568/.minikube/profiles/addons-151437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-522575 -n kubernetes-upgrade-522575: exit status 2 (14.004280693s)
-- stdout --
Error
-- /stdout --
** stderr **
E0908 11:16:50.925045 2371211 status.go:466] Error apiserver status: https://192.168.76.2:8443/healthz returned error 500:
[+]ping ok
[-]log failed: reason withheld
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
** /stderr **
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-522575" apiserver is not running, skipping kubectl commands (state="Error")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-522575" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-arm64 delete -p kubernetes-upgrade-522575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-522575: (4.510415941s)
--- FAIL: TestKubernetesUpgrade (355.47s)