=== RUN TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (28.594543996s)
version_upgrade_test.go:227: (dbg) Run: out/minikube-linux-amd64 stop -p kubernetes-upgrade-896338
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-896338: (4.45198081s)
version_upgrade_test.go:232: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-896338 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-896338 status --format={{.Host}}: exit status 7 (86.432106ms)
-- stdout --
Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: (29.675351354s)
version_upgrade_test.go:248: (dbg) Run: kubectl --context kubernetes-upgrade-896338 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd: exit status 106 (89.202915ms)
-- stdout --
* [kubernetes-upgrade-896338] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21924
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr **
X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
* Suggestion:
1) Recreate the cluster with Kubernetes 1.28.0, by running:
minikube delete -p kubernetes-upgrade-896338
minikube start -p kubernetes-upgrade-896338 --kubernetes-version=v1.28.0
2) Create a second cluster with Kubernetes 1.28.0, by running:
minikube start -p kubernetes-upgrade-8963382 --kubernetes-version=v1.28.0
3) Use the existing cluster at version Kubernetes 1.34.1, by running:
minikube start -p kubernetes-upgrade-896338 --kubernetes-version=v1.34.1
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 80 (7m20.450450929s)
-- stdout --
* [kubernetes-upgrade-896338] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21924
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on existing profile
* Starting "kubernetes-upgrade-896338" primary control-plane node in "kubernetes-upgrade-896338" cluster
* Pulling base image v0.0.48-1763507788-21924 ...
* Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons:
-- /stdout --
** stderr **
I1119 02:26:26.095711 208368 out.go:360] Setting OutFile to fd 1 ...
I1119 02:26:26.095863 208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:26:26.095875 208368 out.go:374] Setting ErrFile to fd 2...
I1119 02:26:26.095882 208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:26:26.096125 208368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
I1119 02:26:26.113581 208368 out.go:368] Setting JSON to false
I1119 02:26:26.115015 208368 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4126,"bootTime":1763515060,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1119 02:26:26.115148 208368 start.go:143] virtualization: kvm guest
I1119 02:26:26.116794 208368 out.go:179] * [kubernetes-upgrade-896338] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1119 02:26:26.118392 208368 out.go:179] - MINIKUBE_LOCATION=21924
I1119 02:26:26.118393 208368 notify.go:221] Checking for updates...
I1119 02:26:26.120772 208368 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1119 02:26:26.122416 208368 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
I1119 02:26:26.124418 208368 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
I1119 02:26:26.128814 208368 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1119 02:26:26.130090 208368 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1119 02:26:26.131935 208368 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:26:26.132583 208368 driver.go:422] Setting default libvirt URI to qemu:///system
I1119 02:26:26.168868 208368 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
I1119 02:26:26.168950 208368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1119 02:26:26.251452 208368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:26:26.240245553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1119 02:26:26.251575 208368 docker.go:319] overlay module found
I1119 02:26:26.253351 208368 out.go:179] * Using the docker driver based on existing profile
I1119 02:26:26.254517 208368 start.go:309] selected driver: docker
I1119 02:26:26.254535 208368 start.go:930] validating driver "docker" against &{Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1119 02:26:26.254629 208368 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1119 02:26:26.255515 208368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1119 02:26:26.329891 208368 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 02:26:26.317174636 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1119 02:26:26.330237 208368 cni.go:84] Creating CNI manager for ""
I1119 02:26:26.330299 208368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1119 02:26:26.330349 208368 start.go:353] cluster config:
{Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1119 02:26:26.332985 208368 out.go:179] * Starting "kubernetes-upgrade-896338" primary control-plane node in "kubernetes-upgrade-896338" cluster
I1119 02:26:26.334248 208368 cache.go:134] Beginning downloading kic base image for docker with containerd
I1119 02:26:26.335658 208368 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
I1119 02:26:26.337047 208368 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 02:26:26.337086 208368 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
I1119 02:26:26.337095 208368 cache.go:65] Caching tarball of preloaded images
I1119 02:26:26.337176 208368 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
I1119 02:26:26.337205 208368 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1119 02:26:26.337325 208368 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
I1119 02:26:26.337488 208368 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/config.json ...
I1119 02:26:26.362337 208368 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
I1119 02:26:26.362357 208368 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
I1119 02:26:26.362393 208368 cache.go:243] Successfully downloaded all kic artifacts
I1119 02:26:26.362420 208368 start.go:360] acquireMachinesLock for kubernetes-upgrade-896338: {Name:mkcc2d1156d34e99d5c80a4b60172f822d6bf4cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1119 02:26:26.362479 208368 start.go:364] duration metric: took 38.96µs to acquireMachinesLock for "kubernetes-upgrade-896338"
I1119 02:26:26.362502 208368 start.go:96] Skipping create...Using existing machine configuration
I1119 02:26:26.362507 208368 fix.go:54] fixHost starting:
I1119 02:26:26.362710 208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
I1119 02:26:26.386432 208368 fix.go:112] recreateIfNeeded on kubernetes-upgrade-896338: state=Running err=<nil>
W1119 02:26:26.386456 208368 fix.go:138] unexpected machine state, will restart: <nil>
I1119 02:26:26.388067 208368 out.go:252] * Updating the running docker "kubernetes-upgrade-896338" container ...
I1119 02:26:26.388102 208368 machine.go:94] provisionDockerMachine start ...
I1119 02:26:26.388168 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:26.411458 208368 main.go:143] libmachine: Using SSH client type: native
I1119 02:26:26.411844 208368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I1119 02:26:26.411864 208368 main.go:143] libmachine: About to run SSH command:
hostname
I1119 02:26:26.549789 208368 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-896338
I1119 02:26:26.549824 208368 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-896338"
I1119 02:26:26.549893 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:26.574492 208368 main.go:143] libmachine: Using SSH client type: native
I1119 02:26:26.574788 208368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I1119 02:26:26.574808 208368 main.go:143] libmachine: About to run SSH command:
sudo hostname kubernetes-upgrade-896338 && echo "kubernetes-upgrade-896338" | sudo tee /etc/hostname
I1119 02:26:26.721175 208368 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-896338
I1119 02:26:26.721268 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:26.742777 208368 main.go:143] libmachine: Using SSH client type: native
I1119 02:26:26.743043 208368 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 127.0.0.1 32989 <nil> <nil>}
I1119 02:26:26.743076 208368 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\skubernetes-upgrade-896338' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-896338/g' /etc/hosts;
else
echo '127.0.1.1 kubernetes-upgrade-896338' | sudo tee -a /etc/hosts;
fi
fi
I1119 02:26:26.883387 208368 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1119 02:26:26.883418 208368 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
I1119 02:26:26.883464 208368 ubuntu.go:190] setting up certificates
I1119 02:26:26.883477 208368 provision.go:84] configureAuth start
I1119 02:26:26.883545 208368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-896338
I1119 02:26:26.907583 208368 provision.go:143] copyHostCerts
I1119 02:26:26.907661 208368 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
I1119 02:26:26.907683 208368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
I1119 02:26:26.907775 208368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
I1119 02:26:26.907916 208368 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
I1119 02:26:26.907928 208368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
I1119 02:26:26.907972 208368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
I1119 02:26:26.908068 208368 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
I1119 02:26:26.908080 208368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
I1119 02:26:26.908114 208368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
I1119 02:26:26.908213 208368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-896338 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-896338 localhost minikube]
I1119 02:26:27.007550 208368 provision.go:177] copyRemoteCerts
I1119 02:26:27.007602 208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1119 02:26:27.007645 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:27.028966 208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
I1119 02:26:27.133845 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1119 02:26:27.155962 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1119 02:26:27.175670 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1119 02:26:27.194186 208368 provision.go:87] duration metric: took 310.696225ms to configureAuth
I1119 02:26:27.194216 208368 ubuntu.go:206] setting minikube options for container-runtime
I1119 02:26:27.194434 208368 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:26:27.194449 208368 machine.go:97] duration metric: took 806.340026ms to provisionDockerMachine
I1119 02:26:27.194457 208368 start.go:293] postStartSetup for "kubernetes-upgrade-896338" (driver="docker")
I1119 02:26:27.194466 208368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1119 02:26:27.194512 208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1119 02:26:27.194547 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:27.215502 208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
I1119 02:26:27.314396 208368 ssh_runner.go:195] Run: cat /etc/os-release
I1119 02:26:27.318628 208368 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1119 02:26:27.318654 208368 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1119 02:26:27.318665 208368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
I1119 02:26:27.318716 208368 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
I1119 02:26:27.318795 208368 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
I1119 02:26:27.318890 208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1119 02:26:27.333415 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
I1119 02:26:27.354169 208368 start.go:296] duration metric: took 159.698533ms for postStartSetup
I1119 02:26:27.354255 208368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1119 02:26:27.354300 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:27.376380 208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
I1119 02:26:27.476444 208368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1119 02:26:27.481683 208368 fix.go:56] duration metric: took 1.11916901s for fixHost
I1119 02:26:27.481709 208368 start.go:83] releasing machines lock for "kubernetes-upgrade-896338", held for 1.119217915s
I1119 02:26:27.481771 208368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-896338
I1119 02:26:27.501873 208368 ssh_runner.go:195] Run: cat /version.json
I1119 02:26:27.501928 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:27.502070 208368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1119 02:26:27.502126 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:27.525894 208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
I1119 02:26:27.527315 208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
I1119 02:26:27.711750 208368 ssh_runner.go:195] Run: systemctl --version
I1119 02:26:27.719641 208368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1119 02:26:27.724548 208368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1119 02:26:27.724606 208368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1119 02:26:27.734240 208368 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1119 02:26:27.734265 208368 start.go:496] detecting cgroup driver to use...
I1119 02:26:27.734296 208368 detect.go:190] detected "systemd" cgroup driver on host os
I1119 02:26:27.734338 208368 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1119 02:26:27.749150 208368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1119 02:26:27.764722 208368 docker.go:218] disabling cri-docker service (if available) ...
I1119 02:26:27.764774 208368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1119 02:26:27.782122 208368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1119 02:26:27.795957 208368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1119 02:26:27.901664 208368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1119 02:26:28.020065 208368 docker.go:234] disabling docker service ...
I1119 02:26:28.020126 208368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1119 02:26:28.038442 208368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1119 02:26:28.053200 208368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1119 02:26:28.193711 208368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1119 02:26:28.347018 208368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1119 02:26:28.361220 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1119 02:26:28.380079 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1119 02:26:28.390885 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1119 02:26:28.401695 208368 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1119 02:26:28.401758 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1119 02:26:28.411884 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1119 02:26:28.422027 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1119 02:26:28.431897 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1119 02:26:28.442680 208368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1119 02:26:28.452218 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1119 02:26:28.461863 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1119 02:26:28.471641 208368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1119 02:26:28.483087 208368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1119 02:26:28.491476 208368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1119 02:26:28.500101 208368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1119 02:26:28.614400 208368 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1119 02:26:28.768022 208368 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1119 02:26:28.768122 208368 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1119 02:26:28.775224 208368 start.go:564] Will wait 60s for crictl version
I1119 02:26:28.775340 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:28.781105 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1119 02:26:28.816521 208368 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.1.5
RuntimeApiVersion: v1
I1119 02:26:28.816595 208368 ssh_runner.go:195] Run: containerd --version
I1119 02:26:28.844292 208368 ssh_runner.go:195] Run: containerd --version
I1119 02:26:28.874792 208368 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
I1119 02:26:28.876538 208368 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-896338 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 02:26:28.901031 208368 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1119 02:26:28.907217 208368 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1119 02:26:28.907592 208368 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 02:26:28.907837 208368 ssh_runner.go:195] Run: sudo crictl images --output json
I1119 02:26:28.947182 208368 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-scheduler:v1.34.1". assuming images are not preloaded.
I1119 02:26:28.947328 208368 ssh_runner.go:195] Run: which lz4
I1119 02:26:28.952694 208368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1119 02:26:28.958484 208368 ssh_runner.go:356] copy: skipping /preloaded.tar.lz4 (exists)
I1119 02:26:28.958508 208368 containerd.go:563] duration metric: took 5.861643ms to copy over tarball
I1119 02:26:28.958566 208368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1119 02:26:32.276678 208368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.318087658s)
I1119 02:26:32.276762 208368 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball:
** stderr **
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
tar: Exiting with failure status due to previous errors
** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
stdout:
stderr:
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
tar: Exiting with failure status due to previous errors
I1119 02:26:32.276873 208368 ssh_runner.go:195] Run: sudo crictl images --output json
I1119 02:26:32.306738 208368 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-scheduler:v1.34.1". assuming images are not preloaded.
I1119 02:26:32.306763 208368 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I1119 02:26:32.306933 208368 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
I1119 02:26:32.306991 208368 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
I1119 02:26:32.307043 208368 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
I1119 02:26:32.307103 208368 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:32.306961 208368 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
I1119 02:26:32.306996 208368 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I1119 02:26:32.306960 208368 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
I1119 02:26:32.307545 208368 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
I1119 02:26:32.308746 208368 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
I1119 02:26:32.308830 208368 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I1119 02:26:32.308966 208368 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
I1119 02:26:32.309030 208368 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
I1119 02:26:32.309010 208368 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
I1119 02:26:32.309062 208368 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:32.309177 208368 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
I1119 02:26:32.309303 208368 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
I1119 02:26:32.473580 208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
I1119 02:26:32.473652 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
I1119 02:26:32.482356 208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
I1119 02:26:32.482461 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
I1119 02:26:32.482459 208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
I1119 02:26:32.482537 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
I1119 02:26:32.509847 208368 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
I1119 02:26:32.509948 208368 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
I1119 02:26:32.510008 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:32.512254 208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
I1119 02:26:32.512315 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
I1119 02:26:32.516765 208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
I1119 02:26:32.516839 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I1119 02:26:32.520899 208368 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
I1119 02:26:32.521016 208368 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
I1119 02:26:32.521087 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:32.521427 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I1119 02:26:32.521537 208368 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
I1119 02:26:32.521570 208368 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
I1119 02:26:32.521632 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:32.522473 208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
I1119 02:26:32.522519 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
I1119 02:26:32.524864 208368 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
I1119 02:26:32.524949 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
I1119 02:26:32.549668 208368 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
I1119 02:26:32.549718 208368 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
I1119 02:26:32.549772 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:32.563761 208368 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
I1119 02:26:32.563807 208368 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
I1119 02:26:32.563866 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:32.563990 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
I1119 02:26:32.568000 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I1119 02:26:32.568327 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
I1119 02:26:32.571812 208368 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
I1119 02:26:32.572115 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I1119 02:26:32.572125 208368 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
I1119 02:26:32.572181 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:32.572056 208368 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
I1119 02:26:32.572218 208368 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
I1119 02:26:32.572238 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:32.572307 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1119 02:26:32.659950 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
I1119 02:26:32.660037 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I1119 02:26:32.660107 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
I1119 02:26:32.660255 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
I1119 02:26:32.660849 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
I1119 02:26:32.660989 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
I1119 02:26:32.661014 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
I1119 02:26:32.706989 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
I1119 02:26:32.707021 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
I1119 02:26:32.707049 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
I1119 02:26:32.795059 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
I1119 02:26:32.795132 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
I1119 02:26:32.795168 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
I1119 02:26:32.795181 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
I1119 02:26:32.832828 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
I1119 02:26:32.864167 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
I1119 02:26:33.619208 208368 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
I1119 02:26:33.619275 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:33.644910 208368 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1119 02:26:33.644965 208368 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:33.645010 208368 ssh_runner.go:195] Run: which crictl
I1119 02:26:33.650104 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:33.678431 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:33.706942 208368 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:33.735119 208368 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1119 02:26:33.735207 208368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1119 02:26:33.739320 208368 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
I1119 02:26:33.739346 208368 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1119 02:26:33.739417 208368 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1119 02:26:33.957571 208368 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1119 02:26:33.957634 208368 cache_images.go:94] duration metric: took 1.650855136s to LoadCachedImages
W1119 02:26:33.957710 208368 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1: no such file or directory
X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/21924-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1: no such file or directory
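The warning above is non-fatal: only the kube-controller-manager v1.34.1 tarball is missing from the local cache, so that one image will be pulled through the container runtime instead of being preloaded. A quick way to see what the runtime already holds on the node (a sketch only, assuming the same crictl and ctr binaries the surrounding commands use) is:
  sudo crictl images | grep v1.34.1
  sudo ctr -n k8s.io images ls -q | grep kube-controller-manager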
I1119 02:26:33.957725 208368 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
I1119 02:26:33.957842 208368 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-896338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1119 02:26:33.957910 208368 ssh_runner.go:195] Run: sudo crictl info
I1119 02:26:33.988381 208368 cni.go:84] Creating CNI manager for ""
I1119 02:26:33.988404 208368 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1119 02:26:33.988422 208368 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1119 02:26:33.988451 208368 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-896338 NodeName:kubernetes-upgrade-896338 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1119 02:26:33.988606 208368 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "kubernetes-upgrade-896338"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
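The generated config above is four YAML documents separated by "---": InitConfiguration (node registration, advertise address), ClusterConfiguration (component extraArgs, etcd data dir, networking), KubeletConfiguration, and KubeProxyConfiguration. A minimal check that all four documents made it into the file transferred below (path taken from the scp line that follows; illustration only) is:
  sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new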
I1119 02:26:33.988691 208368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1119 02:26:33.999039 208368 binaries.go:51] Found k8s binaries, skipping transfer
I1119 02:26:33.999108 208368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1119 02:26:34.007943 208368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I1119 02:26:34.025266 208368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1119 02:26:34.041893 208368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1119 02:26:34.055278 208368 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1119 02:26:34.059867 208368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1119 02:26:34.168130 208368 ssh_runner.go:195] Run: sudo systemctl start kubelet
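After the daemon-reload and restart above, the kubelet should be running the pinned /var/lib/minikube/binaries/v1.34.1/kubelet with the --hostname-override and --node-ip flags from the ExecStart line written earlier. A sanity check on the node (standard systemd commands, shown only as a sketch) could be:
  systemctl is-active kubelet
  systemctl cat kubelet | grep -A 2 ExecStart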
I1119 02:26:34.188952 208368 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338 for IP: 192.168.85.2
I1119 02:26:34.188975 208368 certs.go:195] generating shared ca certs ...
I1119 02:26:34.188994 208368 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:26:34.189150 208368 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
I1119 02:26:34.189210 208368 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
I1119 02:26:34.189218 208368 certs.go:257] generating profile certs ...
I1119 02:26:34.189309 208368 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key
I1119 02:26:34.189359 208368 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/apiserver.key.6cf5ace0
I1119 02:26:34.189420 208368 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/proxy-client.key
I1119 02:26:34.189559 208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
W1119 02:26:34.189589 208368 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
I1119 02:26:34.189599 208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
I1119 02:26:34.189629 208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
I1119 02:26:34.189658 208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
I1119 02:26:34.189687 208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
I1119 02:26:34.189735 208368 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
I1119 02:26:34.190526 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1119 02:26:34.220408 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1119 02:26:34.247878 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1119 02:26:34.276203 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1119 02:26:34.299606 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
I1119 02:26:34.321787 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1119 02:26:34.340957 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1119 02:26:34.363167 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1119 02:26:34.386180 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1119 02:26:34.407202 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
I1119 02:26:34.429119 208368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
I1119 02:26:34.450412 208368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1119 02:26:34.465425 208368 ssh_runner.go:195] Run: openssl version
I1119 02:26:34.473391 208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1119 02:26:34.483962 208368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1119 02:26:34.488737 208368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
I1119 02:26:34.488800 208368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1119 02:26:34.529195 208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1119 02:26:34.539184 208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
I1119 02:26:34.549392 208368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
I1119 02:26:34.554191 208368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
I1119 02:26:34.554255 208368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
I1119 02:26:34.595783 208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
I1119 02:26:34.607410 208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
I1119 02:26:34.621578 208368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
I1119 02:26:34.629723 208368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
I1119 02:26:34.629786 208368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
I1119 02:26:34.675928 208368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
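The three openssl x509 -hash runs above compute the subject-name hash OpenSSL uses for CA lookup, and each ln -fs creates the matching <hash>.0 link in /etc/ssl/certs: b5213941.0 for minikubeCA.pem, 51391683.0 for 14657.pem, and 3ec20f2e.0 for 146572.pem. Verifying one of the links by hand (illustrative only) would look like:
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  ls -l /etc/ssl/certs/b5213941.0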
I1119 02:26:34.685390 208368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1119 02:26:34.690500 208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1119 02:26:34.740572 208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1119 02:26:34.791600 208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1119 02:26:34.832919 208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1119 02:26:34.880790 208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1119 02:26:34.934254 208368 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
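Each openssl x509 ... -checkend 86400 run above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is presumably how minikube decides the existing control-plane certs can be reused. The same check by hand, using one of the paths from the log (sketch only):
  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for at least 24h" || echo "expires within 24h"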
I1119 02:26:35.000402 208368 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-896338 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-896338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1119 02:26:35.000499 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1119 02:26:35.000558 208368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1119 02:26:35.045018 208368 cri.go:89] found id: ""
I1119 02:26:35.045078 208368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1119 02:26:35.058407 208368 kubeadm.go:417] found existing configuration files, will attempt cluster restart
I1119 02:26:35.058427 208368 kubeadm.go:598] restartPrimaryControlPlane start ...
I1119 02:26:35.058476 208368 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1119 02:26:35.069042 208368 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1119 02:26:35.069857 208368 kubeconfig.go:125] found "kubernetes-upgrade-896338" server: "https://192.168.85.2:8443"
I1119 02:26:35.070789 208368 kapi.go:59] client config for kubernetes-upgrade-896338: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key", CAFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1119 02:26:35.071327 208368 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1119 02:26:35.071349 208368 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1119 02:26:35.071355 208368 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1119 02:26:35.071361 208368 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1119 02:26:35.071383 208368 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1119 02:26:35.071789 208368 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1119 02:26:35.082968 208368 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
I1119 02:26:35.083054 208368 kubeadm.go:602] duration metric: took 24.617333ms to restartPrimaryControlPlane
I1119 02:26:35.083083 208368 kubeadm.go:403] duration metric: took 82.694531ms to StartCluster
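The restart path above skipped reconfiguration because the diff -u of /var/tmp/minikube/kubeadm.yaml against kubeadm.yaml.new appears to have come back clean: diff exits 0 when the two files are identical, so the existing control plane is reused rather than re-bootstrapped. The equivalent manual check (same files, same semantics) is:
  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
    && echo "configs identical, restart only" || echo "configs differ, reconfiguration needed"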
I1119 02:26:35.083115 208368 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:26:35.083201 208368 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21924-11107/kubeconfig
I1119 02:26:35.084225 208368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:26:35.084544 208368 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1119 02:26:35.084723 208368 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:26:35.084786 208368 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1119 02:26:35.084893 208368 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-896338"
I1119 02:26:35.084909 208368 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-896338"
W1119 02:26:35.084917 208368 addons.go:248] addon storage-provisioner should already be in state true
I1119 02:26:35.085013 208368 host.go:66] Checking if "kubernetes-upgrade-896338" exists ...
I1119 02:26:35.084965 208368 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-896338"
I1119 02:26:35.085102 208368 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-896338"
I1119 02:26:35.085512 208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
I1119 02:26:35.085541 208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
I1119 02:26:35.087156 208368 out.go:179] * Verifying Kubernetes components...
I1119 02:26:35.088522 208368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1119 02:26:35.113426 208368 kapi.go:59] client config for kubernetes-upgrade-896338: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.crt", KeyFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/profiles/kubernetes-upgrade-896338/client.key", CAFile:"/home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2825740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1119 02:26:35.113774 208368 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-896338"
W1119 02:26:35.113797 208368 addons.go:248] addon default-storageclass should already be in state true
I1119 02:26:35.113825 208368 host.go:66] Checking if "kubernetes-upgrade-896338" exists ...
I1119 02:26:35.114304 208368 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-896338 --format={{.State.Status}}
I1119 02:26:35.116489 208368 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:26:35.117748 208368 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:26:35.117768 208368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1119 02:26:35.117837 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:35.146800 208368 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1119 02:26:35.146822 208368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1119 02:26:35.146879 208368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-896338
I1119 02:26:35.154838 208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
I1119 02:26:35.178909 208368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/kubernetes-upgrade-896338/id_rsa Username:docker}
I1119 02:26:35.241005 208368 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1119 02:26:35.260582 208368 api_server.go:52] waiting for apiserver process to appear ...
I1119 02:26:35.260679 208368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1119 02:26:35.273672 208368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:26:35.279303 208368 api_server.go:72] duration metric: took 194.706746ms to wait for apiserver process to appear ...
I1119 02:26:35.279335 208368 api_server.go:88] waiting for apiserver healthz status ...
I1119 02:26:35.279355 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:35.302621 208368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1119 02:26:37.285536 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:37.285580 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
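From here on the same 500 keeps coming back roughly every two seconds: every poststarthook reports ok, but the [-]etcd check fails, so /healthz as a whole stays unhealthy and minikube keeps polling. Digging into the failing etcd check from the node would start with the verbose healthz endpoint and the etcd container logs (sketch only, assuming the same crictl binary used elsewhere in this log):
  curl -sk "https://192.168.85.2:8443/healthz?verbose"
  sudo crictl ps -a --name etcd
  sudo crictl logs $(sudo crictl ps -a --name etcd -q | head -n 1)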
I1119 02:26:37.285604 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:39.291386 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:39.291427 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:39.291443 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:41.296716 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:41.296754 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:41.296773 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:43.301986 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:43.302018 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:43.302037 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:43.306106 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:43.306129 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:43.779719 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:45.784542 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:45.784578 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:45.784599 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:47.791094 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:47.791253 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:47.791291 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:49.797459 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:49.797493 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:49.797521 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:51.803353 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:51.803390 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:51.803406 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:53.808875 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:53.808921 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:53.808970 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:55.814340 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:55.814388 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:55.814419 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:57.819781 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:57.819812 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:57.819835 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:26:59.824558 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:26:59.824582 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:26:59.824597 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:27:01.830389 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1119 02:27:01.830419 208368 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1119 02:27:01.830438 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:27:06.831618 208368 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1119 02:27:06.831657 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:27:11.832438 208368 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1119 02:27:11.832478 208368 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1119 02:27:13.505962 208368 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I1119 02:27:13.512292 208368 api_server.go:141] control plane version: v1.34.1
I1119 02:27:13.512321 208368 api_server.go:131] duration metric: took 38.232976455s to wait for apiserver health ...
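Editor's note: the long run of blocks above is minikube's api_server.go probing https://192.168.85.2:8443/healthz roughly every two seconds, treating each 500 response (only the etcd check failing, everything else "[+] ok") as "not ready", until the endpoint finally answers 200 after ~38s. Below is a minimal sketch of that kind of poll loop, written from scratch for illustration; the URL, the ?verbose query, the 2s cadence, and the helper names are assumptions and not minikube's actual implementation.

// healthzpoll: a hedged sketch of polling a kube-apiserver /healthz endpoint
// until it reports healthy, loosely mirroring the probe pattern in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, cf. the "Client.Timeout exceeded" lines above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // apiserver serves a self-signed cert
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz probe failed: %v\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			// a 500 with "[-]etcd failed: reason withheld" lands here
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the timestamps above
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz?verbose", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}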
I1119 02:27:13.512332 208368 system_pods.go:43] waiting for kube-system pods to appear ...
W1119 02:28:13.513502 208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
I1119 02:28:13.513542 208368 retry.go:31] will retry after 218.685635ms: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
I1119 02:28:13.732637 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1119 02:28:13.732729 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1119 02:31:35.676886 208368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5m0.403131384s)
W1119 02:31:35.676952 208368 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "storage-provisioner", Namespace: ""
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
W1119 02:31:35.677096 208368 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "storage-provisioner", Namespace: ""
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
]
! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "storage-provisioner", Namespace: ""
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io storage-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:persistent-volume-provisioner)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints k8s.io-minikube-hostpath)
Error from server (Timeout): error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get pods storage-provisioner)
]
I1119 02:31:35.677167 208368 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5m0.374454709s)
W1119 02:31:35.677217 208368 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
Name: "standard", Namespace: ""
from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
I1119 02:31:35.677247 208368 ssh_runner.go:235] Completed: sudo crictl ps -a --quiet --name=kube-apiserver: (3m21.944479492s)
I1119 02:31:35.677269 208368 cri.go:89] found id: "24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9"
I1119 02:31:35.677275 208368 cri.go:89] found id: ""
I1119 02:31:35.677284 208368 logs.go:282] 1 containers: [24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9]
W1119 02:31:35.677299 208368 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
Name: "standard", Namespace: ""
from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
]
! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
Error from server (Timeout): error when retrieving current configuration of:
Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
Name: "standard", Namespace: ""
from server for: "/etc/kubernetes/addons/storageclass.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io standard)
]
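Editor's note: both addon applies above (storage-provisioner and default-storageclass) fail with server-side timeouts even though /healthz eventually returned 200, and minikube's addons.go logs "apply failed, will retry". The sketch below shows that retry-around-kubectl-apply pattern in isolation; the binary and manifest paths are copied from the log, but the retry count and backoff are illustrative assumptions, not minikube's policy.

// applyWithRetry: a rough sketch of retrying `kubectl apply` in the spirit of
// the "apply failed, will retry" lines above. Paths match the log; the retry
// policy is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		fmt.Printf("apply failed, will retry: %v\n", lastErr)
		time.Sleep(time.Duration(i+1) * 5 * time.Second) // simple linear backoff
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		3,
	)
	if err != nil {
		fmt.Println(err)
	}
}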
I1119 02:31:35.677338 208368 ssh_runner.go:195] Run: which crictl
I1119 02:31:35.679965 208368 out.go:179] * Enabled addons:
I1119 02:31:35.681147 208368 addons.go:515] duration metric: took 5m0.596355676s for enable addons: enabled=[]
I1119 02:31:35.683740 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1119 02:31:35.683811 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1119 02:31:35.725560 208368 cri.go:89] found id: "f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c"
I1119 02:31:35.725584 208368 cri.go:89] found id: ""
I1119 02:31:35.725593 208368 logs.go:282] 1 containers: [f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c]
I1119 02:31:35.725653 208368 ssh_runner.go:195] Run: which crictl
I1119 02:31:35.730817 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1119 02:31:35.730897 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1119 02:31:35.774777 208368 cri.go:89] found id: ""
I1119 02:31:35.774803 208368 logs.go:282] 0 containers: []
W1119 02:31:35.774812 208368 logs.go:284] No container was found matching "coredns"
I1119 02:31:35.774818 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1119 02:31:35.774871 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1119 02:31:35.817744 208368 cri.go:89] found id: "2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017"
I1119 02:31:35.817771 208368 cri.go:89] found id: ""
I1119 02:31:35.817781 208368 logs.go:282] 1 containers: [2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017]
I1119 02:31:35.817843 208368 ssh_runner.go:195] Run: which crictl
I1119 02:31:35.824002 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1119 02:31:35.824267 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1119 02:31:35.869793 208368 cri.go:89] found id: ""
I1119 02:31:35.869824 208368 logs.go:282] 0 containers: []
W1119 02:31:35.869834 208368 logs.go:284] No container was found matching "kube-proxy"
I1119 02:31:35.869841 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1119 02:31:35.869898 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1119 02:31:35.910763 208368 cri.go:89] found id: "1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03"
I1119 02:31:35.910785 208368 cri.go:89] found id: ""
I1119 02:31:35.910794 208368 logs.go:282] 1 containers: [1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03]
I1119 02:31:35.910866 208368 ssh_runner.go:195] Run: which crictl
I1119 02:31:35.916693 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1119 02:31:35.916769 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1119 02:31:35.956637 208368 cri.go:89] found id: ""
I1119 02:31:35.956666 208368 logs.go:282] 0 containers: []
W1119 02:31:35.956677 208368 logs.go:284] No container was found matching "kindnet"
I1119 02:31:35.956684 208368 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1119 02:31:35.956758 208368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1119 02:31:35.998527 208368 cri.go:89] found id: ""
I1119 02:31:35.998632 208368 logs.go:282] 0 containers: []
W1119 02:31:35.998644 208368 logs.go:284] No container was found matching "storage-provisioner"
I1119 02:31:35.998661 208368 logs.go:123] Gathering logs for describe nodes ...
I1119 02:31:35.998678 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1119 02:32:36.105090 208368 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.106387506s)
W1119 02:32:36.105143 208368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
output:
** stderr **
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
** /stderr **
I1119 02:32:36.105158 208368 logs.go:123] Gathering logs for kube-apiserver [24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9] ...
I1119 02:32:36.105171 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9"
W1119 02:32:36.132063 208368 logs.go:130] failed kube-apiserver [24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9": Process exited with status 1
stdout:
stderr:
E1119 02:32:36.129725 3701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9\": not found" containerID="24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9"
time="2025-11-19T02:32:36Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9\": not found"
output:
** stderr **
E1119 02:32:36.129725 3701 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9\": not found" containerID="24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9"
time="2025-11-19T02:32:36Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9\": not found"
** /stderr **
I1119 02:32:36.132092 208368 logs.go:123] Gathering logs for etcd [f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c] ...
I1119 02:32:36.132115 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c"
I1119 02:32:36.168586 208368 logs.go:123] Gathering logs for kube-controller-manager [1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03] ...
I1119 02:32:36.168617 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03"
I1119 02:32:36.200394 208368 logs.go:123] Gathering logs for kubelet ...
I1119 02:32:36.200424 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1119 02:32:36.241938 208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.359998 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
W1119 02:32:36.242091 208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.367766 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
W1119 02:32:36.242234 208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.369746 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
W1119 02:32:36.242373 208368 logs.go:138] Found kubelet problem: Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.371861 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
I1119 02:32:36.296651 208368 logs.go:123] Gathering logs for dmesg ...
I1119 02:32:36.296695 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1119 02:32:36.313476 208368 logs.go:123] Gathering logs for kube-scheduler [2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017] ...
I1119 02:32:36.313518 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017"
I1119 02:32:36.342697 208368 logs.go:123] Gathering logs for containerd ...
I1119 02:32:36.342725 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1119 02:32:36.408496 208368 logs.go:123] Gathering logs for container status ...
I1119 02:32:36.408523 208368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
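Editor's note: the diagnostics pass above lists CRI containers per component and then tails their logs plus the kubelet and containerd journals. A compact sketch of collecting the same material from Go is below; the commands and the 400-line tail mirror what the log runs, but the overall wrapper is an assumption for illustration and would normally run over SSH inside the node rather than locally.

// collectdiag: a hedged sketch of the log gathering shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("(%s %s failed: %v)\n%s", name, strings.Join(args, " "), err, out)
	}
	return string(out)
}

func main() {
	// List kube-apiserver / etcd / scheduler / controller-manager containers, running or exited.
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids := strings.Fields(run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component))
		fmt.Printf("%s: %d container(s)\n", component, len(ids))
		for _, id := range ids {
			fmt.Println(run("sudo", "crictl", "logs", "--tail", "400", id))
		}
	}
	// Recent kubelet and containerd journal entries, as gathered above.
	fmt.Println(run("sudo", "journalctl", "-u", "kubelet", "-n", "400"))
	fmt.Println(run("sudo", "journalctl", "-u", "containerd", "-n", "400"))
}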
I1119 02:32:36.439684 208368 out.go:374] Setting ErrFile to fd 2...
I1119 02:32:36.439708 208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
W1119 02:32:36.439772 208368 out.go:285] X Problems detected in kubelet:
X Problems detected in kubelet:
W1119 02:32:36.439787 208368 out.go:285] Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.359998 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.359998 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
W1119 02:32:36.439797 208368 out.go:285] Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.367766 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.367766 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
W1119 02:32:36.439808 208368 out.go:285] Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.369746 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.369746 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="66027d99863065946c7e847721e63c6c" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
W1119 02:32:36.439819 208368 out.go:285] Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.371861 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
Nov 19 02:26:22 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:26:22.371861 1153 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-kubernetes-upgrade-896338\" is forbidden: User \"system:node:kubernetes-upgrade-896338\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'kubernetes-upgrade-896338' and this object" podUID="671496fd68155efc6c8e333483b2ec93" pod="kube-system/etcd-kubernetes-upgrade-896338"
I1119 02:32:36.439827 208368 out.go:374] Setting ErrFile to fd 2...
I1119 02:32:36.439842 208368 out.go:408] TERM=,COLORTERM=, which probably does not support color
W1119 02:33:46.450455 208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
I1119 02:33:46.452233 208368 out.go:203]
W1119 02:33:46.453522 208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
W1119 02:33:46.453544 208368 out.go:285] *
*
W1119 02:33:46.455831 208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1119 02:33:46.457044 208368 out.go:203]
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-896338 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-11-19 02:33:46.536002962 +0000 UTC m=+2246.038109760
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect kubernetes-upgrade-896338
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-896338:
-- stdout --
[
{
"Id": "969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe",
"Created": "2025-11-19T02:25:33.919793352Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 200955,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-11-19T02:25:56.498827348Z",
"FinishedAt": "2025-11-19T02:25:55.50311594Z"
},
"Image": "sha256:a368e3d71517ce17114afb6c9921965419df972dd0e2d32a9973a8946f0910a3",
"ResolvConfPath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/hostname",
"HostsPath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/hosts",
"LogPath": "/var/lib/docker/containers/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe/969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe-json.log",
"Name": "/kubernetes-upgrade-896338",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"kubernetes-upgrade-896338:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "kubernetes-upgrade-896338",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": null,
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 3221225472,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 6442450944,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "969b8bd4216afabb406559f3e1a22664d005617358fe9e598899f2ace66dabbe",
"LowerDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4-init/diff:/var/lib/docker/overlay2/de7938e6a920c133c8c6b988444cfbf6706fdc6982445229ca70e2488a725edb/diff",
"MergedDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4/merged",
"UpperDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4/diff",
"WorkDir": "/var/lib/docker/overlay2/631bd77c3dbf585bbd3c946ea070c38ae4ca0251671d2b73c9f02da374b73bd4/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "kubernetes-upgrade-896338",
"Source": "/var/lib/docker/volumes/kubernetes-upgrade-896338/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "kubernetes-upgrade-896338",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "kubernetes-upgrade-896338",
"name.minikube.sigs.k8s.io": "kubernetes-upgrade-896338",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"SandboxID": "1153b4f9ef44bdc780101f996d65a35e48daf57d1eb0832294c5cf8db1dfc323",
"SandboxKey": "/var/run/docker/netns/1153b4f9ef44",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32989"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32990"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32993"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32991"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32992"
}
]
},
"Networks": {
"kubernetes-upgrade-896338": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2",
"IPv6Address": ""
},
"Links": null,
"Aliases": null,
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "3ec6f45a7001c9838b1db6d7bcbc836f8d598109023fa2e585c2ea7beed066aa",
"EndpointID": "7d69cba837f5e774db2e8b3f43d7f1317ce0691adab51bc67ff99a7934c17636",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"MacAddress": "82:24:4b:ee:ad:76",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"kubernetes-upgrade-896338",
"969b8bd4216a"
]
}
}
}
}
]
-- /stdout --
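For reference, the host ports listed under NetworkSettings.Ports above are what the later cli_runner calls read back; a minimal Go sketch (hypothetical helper, not part of the test suite) that extracts one of them using the same docker-inspect Go template seen elsewhere in this log:

// Hypothetical helper: read the host port Docker mapped for a container port,
// using the Go-template format string minikube's cli_runner invokes below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("kubernetes-upgrade-896338", "8443/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // "32992" per the inspect output above
}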
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338: exit status 2 (14.587429758s)
-- stdout --
Running
-- /stdout --
** stderr **
E1119 02:34:01.144008 319649 status.go:466] Error apiserver status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[-]log failed: reason withheld
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
** /stderr **
helpers_test.go:247: status error: exit status 2 (may be ok)
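The 500 above is produced by a single failing check ([-]log) in the apiserver's aggregated health output. A minimal sketch of reproducing that probe directly, assuming the host can reach 192.168.85.2:8443 and that unauthenticated access to /healthz is allowed (the Kubernetes default via the system:public-info-viewer role):

// Sketch only: query the apiserver healthz endpoint that status.go:466 reports on.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Test cluster with a self-signed CA; do not do this against real clusters.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode) // 500 while any individual check reports [-]
	fmt.Println(string(body))    // the [+]/[-] listing shown in the stderr block above
}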
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p kubernetes-upgrade-896338 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-896338 logs -n 25: (1m0.931898717s)
helpers_test.go:260: TestKubernetesUpgrade logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ -p bridge-212776 sudo journalctl -xeu kubelet --all --full --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo cat /etc/kubernetes/kubelet.conf │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo cat /var/lib/kubelet/config.yaml │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo systemctl status docker --all --full --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
│ ssh │ -p bridge-212776 sudo systemctl cat docker --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo cat /etc/docker/daemon.json │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
│ ssh │ -p bridge-212776 sudo docker system info │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
│ ssh │ -p bridge-212776 sudo systemctl status cri-docker --all --full --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
│ ssh │ -p bridge-212776 sudo systemctl cat cri-docker --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
│ ssh │ -p bridge-212776 sudo cat /usr/lib/systemd/system/cri-docker.service │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo cri-dockerd --version │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo systemctl status containerd --all --full --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo systemctl cat containerd --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo cat /lib/systemd/system/containerd.service │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo cat /etc/containerd/config.toml │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo containerd config dump │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo systemctl status crio --all --full --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
│ ssh │ -p bridge-212776 sudo systemctl cat crio --no-pager │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \; │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ ssh │ -p bridge-212776 sudo crio config │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ delete │ -p bridge-212776 │ bridge-212776 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ start │ -p embed-certs-168452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1 │ embed-certs-168452 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
│ addons │ enable metrics-server -p old-k8s-version-691094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ 19 Nov 25 02:33 UTC │
│ stop │ -p old-k8s-version-691094 --alsologtostderr -v=3 │ old-k8s-version-691094 │ jenkins │ v1.37.0 │ 19 Nov 25 02:33 UTC │ │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/19 02:33:19
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.24.6 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1119 02:33:19.818158 315363 out.go:360] Setting OutFile to fd 1 ...
I1119 02:33:19.818478 315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:33:19.818490 315363 out.go:374] Setting ErrFile to fd 2...
I1119 02:33:19.818495 315363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 02:33:19.818721 315363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21924-11107/.minikube/bin
I1119 02:33:19.819330 315363 out.go:368] Setting JSON to false
I1119 02:33:19.820616 315363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4540,"bootTime":1763515060,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1119 02:33:19.820746 315363 start.go:143] virtualization: kvm guest
I1119 02:33:19.822862 315363 out.go:179] * [embed-certs-168452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1119 02:33:19.824498 315363 notify.go:221] Checking for updates...
I1119 02:33:19.825083 315363 out.go:179] - MINIKUBE_LOCATION=21924
I1119 02:33:19.827189 315363 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1119 02:33:19.828628 315363 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21924-11107/kubeconfig
I1119 02:33:19.830282 315363 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21924-11107/.minikube
I1119 02:33:19.832156 315363 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1119 02:33:19.833558 315363 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1119 02:33:19.835289 315363 config.go:182] Loaded profile config "kubernetes-upgrade-896338": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:33:19.835456 315363 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:33:19.835531 315363 config.go:182] Loaded profile config "old-k8s-version-691094": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I1119 02:33:19.835628 315363 driver.go:422] Setting default libvirt URI to qemu:///system
I1119 02:33:19.869670 315363 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
I1119 02:33:19.869754 315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1119 02:33:19.948056 315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:19.935291829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1119 02:33:19.948230 315363 docker.go:319] overlay module found
I1119 02:33:19.949713 315363 out.go:179] * Using the docker driver based on user configuration
I1119 02:33:19.290831 301934 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:33:19.290855 301934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1119 02:33:19.290915 301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
I1119 02:33:19.311399 301934 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1119 02:33:19.311423 301934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1119 02:33:19.311589 301934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-691094
I1119 02:33:19.329209 301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
I1119 02:33:19.348646 301934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/old-k8s-version-691094/id_rsa Username:docker}
I1119 02:33:19.386878 301934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.103.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1119 02:33:19.430928 301934 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1119 02:33:19.450594 301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:33:19.476197 301934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1119 02:33:19.710133 301934 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
I1119 02:33:19.711417 301934 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-691094" to be "Ready" ...
I1119 02:33:19.994360 301934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1119 02:33:19.950788 315363 start.go:309] selected driver: docker
I1119 02:33:19.950820 315363 start.go:930] validating driver "docker" against <nil>
I1119 02:33:19.950835 315363 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1119 02:33:19.951688 315363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1119 02:33:20.027806 315363 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-19 02:33:20.015781927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1119 02:33:20.028020 315363 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1119 02:33:20.028315 315363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1119 02:33:20.030421 315363 out.go:179] * Using Docker driver with root privileges
I1119 02:33:20.031895 315363 cni.go:84] Creating CNI manager for ""
I1119 02:33:20.031986 315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1119 02:33:20.031997 315363 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1119 02:33:20.032066 315363 start.go:353] cluster config:
{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1119 02:33:20.034765 315363 out.go:179] * Starting "embed-certs-168452" primary control-plane node in "embed-certs-168452" cluster
I1119 02:33:20.037487 315363 cache.go:134] Beginning downloading kic base image for docker with containerd
I1119 02:33:20.039029 315363 out.go:179] * Pulling base image v0.0.48-1763507788-21924 ...
I1119 02:33:20.040485 315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 02:33:20.040520 315363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
I1119 02:33:20.040528 315363 cache.go:65] Caching tarball of preloaded images
I1119 02:33:20.040583 315363 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon
I1119 02:33:20.040607 315363 preload.go:238] Found /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I1119 02:33:20.040616 315363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
I1119 02:33:20.040718 315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
I1119 02:33:20.040739 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json: {Name:mk2c1cb92572f9f7372f9d895b2c58b49c99bb3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:20.063579 315363 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a in local docker daemon, skipping pull
I1119 02:33:20.063610 315363 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a exists in daemon, skipping load
I1119 02:33:20.063636 315363 cache.go:243] Successfully downloaded all kic artifacts
I1119 02:33:20.063670 315363 start.go:360] acquireMachinesLock for embed-certs-168452: {Name:mk4860299f8ff219c79992500844e49d455bd43a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1119 02:33:20.063790 315363 start.go:364] duration metric: took 102.461µs to acquireMachinesLock for "embed-certs-168452"
I1119 02:33:20.063835 315363 start.go:93] Provisioning new machine with config: &{Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1119 02:33:20.063944 315363 start.go:125] createHost starting for "" (driver="docker")
I1119 02:33:19.995882 301934 addons.go:515] duration metric: took 741.418352ms for enable addons: enabled=[storage-provisioner default-storageclass]
I1119 02:33:20.065989 315363 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1119 02:33:20.066193 315363 start.go:159] libmachine.API.Create for "embed-certs-168452" (driver="docker")
I1119 02:33:20.066226 315363 client.go:173] LocalClient.Create starting
I1119 02:33:20.066306 315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem
I1119 02:33:20.066338 315363 main.go:143] libmachine: Decoding PEM data...
I1119 02:33:20.066360 315363 main.go:143] libmachine: Parsing certificate...
I1119 02:33:20.066438 315363 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem
I1119 02:33:20.066464 315363 main.go:143] libmachine: Decoding PEM data...
I1119 02:33:20.066475 315363 main.go:143] libmachine: Parsing certificate...
I1119 02:33:20.066835 315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 02:33:20.087889 315363 cli_runner.go:211] docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 02:33:20.087987 315363 network_create.go:284] running [docker network inspect embed-certs-168452] to gather additional debugging logs...
I1119 02:33:20.088020 315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452
W1119 02:33:20.108512 315363 cli_runner.go:211] docker network inspect embed-certs-168452 returned with exit code 1
I1119 02:33:20.108553 315363 network_create.go:287] error running [docker network inspect embed-certs-168452]: docker network inspect embed-certs-168452: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-168452 not found
I1119 02:33:20.108577 315363 network_create.go:289] output of [docker network inspect embed-certs-168452]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-168452 not found
** /stderr **
I1119 02:33:20.108677 315363 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 02:33:20.129985 315363 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed39016f2aa9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:16:a0:62:5a:51} reservation:<nil>}
I1119 02:33:20.130640 315363 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-42b0c19d513b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b2:bf:ca:ce:21:95} reservation:<nil>}
I1119 02:33:20.131454 315363 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-002e39e6dc05 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:8e:34:24:50:a5} reservation:<nil>}
I1119 02:33:20.132210 315363 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1155ea75a94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:76:37:ad:5a:d8:36} reservation:<nil>}
I1119 02:33:20.133253 315363 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3ec6f45a7001 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:12:9a:69:49:8b:1f} reservation:<nil>}
I1119 02:33:20.134343 315363 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ddf580}
I1119 02:33:20.134393 315363 network_create.go:124] attempt to create docker network embed-certs-168452 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
I1119 02:33:20.134459 315363 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-168452 embed-certs-168452
I1119 02:33:20.192566 315363 network_create.go:108] docker network embed-certs-168452 192.168.94.0/24 created
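The subnet chosen above is the first free /24 after skipping the five already-claimed minikube bridges; as the logged sequence (49, 58, 67, 76, 85, 94) suggests, the scan steps the third octet by 9. A rough sketch of that selection, with taken() as a stand-in for minikube's real interface and Docker-network checks:

// Rough sketch of the scan logged by network.go:206/211 above.
package main

import "fmt"

func firstFreeSubnet(taken func(cidr string) bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken(cidr) {
			return cidr
		}
	}
	return ""
}

func main() {
	used := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(func(c string) bool { return used[c] })) // 192.168.94.0/24
}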
I1119 02:33:20.192597 315363 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-168452" container
I1119 02:33:20.192665 315363 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1119 02:33:20.216991 315363 cli_runner.go:164] Run: docker volume create embed-certs-168452 --label name.minikube.sigs.k8s.io=embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true
I1119 02:33:20.240868 315363 oci.go:103] Successfully created a docker volume embed-certs-168452
I1119 02:33:20.240948 315363 cli_runner.go:164] Run: docker run --rm --name embed-certs-168452-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --entrypoint /usr/bin/test -v embed-certs-168452:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -d /var/lib
I1119 02:33:20.653772 315363 oci.go:107] Successfully prepared a docker volume embed-certs-168452
I1119 02:33:20.653851 315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 02:33:20.653866 315363 kic.go:194] Starting extracting preloaded images to volume ...
I1119 02:33:20.653963 315363 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir
I1119 02:33:20.215687 301934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-691094" context rescaled to 1 replicas
W1119 02:33:21.715210 301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
W1119 02:33:24.323644 301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
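node_ready.go above polls the node's Ready condition for up to 6m0s, retrying while the status is "False". An illustrative equivalent using kubectl via os/exec (minikube itself goes through client-go, so this is only a sketch of the same wait):

// Illustrative polling loop for a node's Ready condition.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitNodeReady(ctxName, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctxName, "get", "node", node, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %s", node, timeout)
}

func main() {
	if err := waitNodeReady("old-k8s-version-691094", "old-k8s-version-691094", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}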
I1119 02:33:28.147893 307222 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1119 02:33:28.147982 307222 kubeadm.go:319] [preflight] Running pre-flight checks
I1119 02:33:28.148104 307222 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1119 02:33:28.148201 307222 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
I1119 02:33:28.148256 307222 kubeadm.go:319] OS: Linux
I1119 02:33:28.148332 307222 kubeadm.go:319] CGROUPS_CPU: enabled
I1119 02:33:28.148450 307222 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1119 02:33:28.148522 307222 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1119 02:33:28.148596 307222 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1119 02:33:28.148672 307222 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1119 02:33:28.148756 307222 kubeadm.go:319] CGROUPS_PIDS: enabled
I1119 02:33:28.148841 307222 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1119 02:33:28.148915 307222 kubeadm.go:319] CGROUPS_IO: enabled

I1119 02:33:28.149019 307222 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1119 02:33:28.149159 307222 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1119 02:33:28.149311 307222 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1119 02:33:28.149421 307222 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1119 02:33:28.151537 307222 out.go:252] - Generating certificates and keys ...
I1119 02:33:28.151647 307222 kubeadm.go:319] [certs] Using existing ca certificate authority
I1119 02:33:28.151774 307222 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1119 02:33:28.151834 307222 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1119 02:33:28.151902 307222 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1119 02:33:28.152000 307222 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1119 02:33:28.152068 307222 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1119 02:33:28.152179 307222 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1119 02:33:28.152343 307222 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
I1119 02:33:28.152451 307222 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1119 02:33:28.152598 307222 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-483142] and IPs [192.168.76.2 127.0.0.1 ::1]
I1119 02:33:28.152690 307222 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1119 02:33:28.152796 307222 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1119 02:33:28.152837 307222 kubeadm.go:319] [certs] Generating "sa" key and public key
I1119 02:33:28.152894 307222 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1119 02:33:28.152945 307222 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1119 02:33:28.153002 307222 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1119 02:33:28.153051 307222 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1119 02:33:28.153118 307222 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1119 02:33:28.153171 307222 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1119 02:33:28.153255 307222 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1119 02:33:28.153358 307222 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1119 02:33:28.154609 307222 out.go:252] - Booting up control plane ...
I1119 02:33:28.154709 307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1119 02:33:28.154821 307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1119 02:33:28.154904 307222 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1119 02:33:28.155033 307222 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1119 02:33:28.155173 307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1119 02:33:28.155323 307222 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1119 02:33:28.155456 307222 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1119 02:33:28.155501 307222 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1119 02:33:28.155631 307222 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1119 02:33:28.155728 307222 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1119 02:33:28.155805 307222 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001464049s
I1119 02:33:28.155906 307222 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1119 02:33:28.156017 307222 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
I1119 02:33:28.156135 307222 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1119 02:33:28.156242 307222 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1119 02:33:28.156335 307222 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.319882231s
I1119 02:33:28.156456 307222 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.432703999s
I1119 02:33:28.156560 307222 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001475545s
I1119 02:33:28.156685 307222 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1119 02:33:28.156832 307222 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1119 02:33:28.156917 307222 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1119 02:33:28.157202 307222 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-483142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1119 02:33:28.157272 307222 kubeadm.go:319] [bootstrap-token] Using token: nwrx92.0c942uuundzydmcz
I1119 02:33:28.159046 307222 out.go:252] - Configuring RBAC rules ...
I1119 02:33:28.159207 307222 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1119 02:33:28.159328 307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1119 02:33:28.159549 307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1119 02:33:28.159720 307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1119 02:33:28.159922 307222 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1119 02:33:28.160077 307222 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1119 02:33:28.160254 307222 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1119 02:33:28.160329 307222 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1119 02:33:28.160427 307222 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1119 02:33:28.160443 307222 kubeadm.go:319]
I1119 02:33:28.160527 307222 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1119 02:33:28.160536 307222 kubeadm.go:319]
I1119 02:33:28.160603 307222 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1119 02:33:28.160610 307222 kubeadm.go:319]
I1119 02:33:28.160642 307222 kubeadm.go:319] mkdir -p $HOME/.kube
I1119 02:33:28.160730 307222 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1119 02:33:28.160832 307222 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1119 02:33:28.160845 307222 kubeadm.go:319]
I1119 02:33:28.160922 307222 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1119 02:33:28.160942 307222 kubeadm.go:319]
I1119 02:33:28.161016 307222 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1119 02:33:28.161031 307222 kubeadm.go:319]
I1119 02:33:28.161114 307222 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1119 02:33:28.161229 307222 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1119 02:33:28.161347 307222 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1119 02:33:28.161359 307222 kubeadm.go:319]
I1119 02:33:28.161531 307222 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1119 02:33:28.161656 307222 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1119 02:33:28.161665 307222 kubeadm.go:319]
I1119 02:33:28.161797 307222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
I1119 02:33:28.161968 307222 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
I1119 02:33:28.162022 307222 kubeadm.go:319] --control-plane
I1119 02:33:28.162036 307222 kubeadm.go:319]
I1119 02:33:28.162163 307222 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1119 02:33:28.162174 307222 kubeadm.go:319]
I1119 02:33:28.162301 307222 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nwrx92.0c942uuundzydmcz \
I1119 02:33:28.162456 307222 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a
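The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes it from the certificateDir logged earlier (/var/lib/minikube/certs/ca.crt; exact filename assumed from kubeadm defaults):

// Recompute kubeadm's CA cert hash: sha256 over the DER-encoded SPKI of the CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}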
I1119 02:33:28.162469 307222 cni.go:84] Creating CNI manager for ""
I1119 02:33:28.162475 307222 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1119 02:33:28.164382 307222 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1119 02:33:25.786283 315363 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21924-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-168452:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a -I lz4 -xf /preloaded.tar -C /extractDir: (5.132274902s)
I1119 02:33:25.786322 315363 kic.go:203] duration metric: took 5.132452147s to extract preloaded images to volume ...
W1119 02:33:25.786460 315363 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1119 02:33:25.786504 315363 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1119 02:33:25.786554 315363 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1119 02:33:25.853413 315363 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-168452 --name embed-certs-168452 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-168452 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-168452 --network embed-certs-168452 --ip 192.168.94.2 --volume embed-certs-168452:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a
I1119 02:33:26.238651 315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Running}}
I1119 02:33:26.261169 315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
I1119 02:33:26.284313 315363 cli_runner.go:164] Run: docker exec embed-certs-168452 stat /var/lib/dpkg/alternatives/iptables
I1119 02:33:26.336955 315363 oci.go:144] the created container "embed-certs-168452" has a running status.
I1119 02:33:26.336985 315363 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa...
I1119 02:33:26.484310 315363 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1119 02:33:26.517116 315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
I1119 02:33:26.542901 315363 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1119 02:33:26.542925 315363 kic_runner.go:114] Args: [docker exec --privileged embed-certs-168452 chown docker:docker /home/docker/.ssh/authorized_keys]
I1119 02:33:26.595205 315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
I1119 02:33:26.623359 315363 machine.go:94] provisionDockerMachine start ...
I1119 02:33:26.623527 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:26.646254 315363 main.go:143] libmachine: Using SSH client type: native
I1119 02:33:26.646550 315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 127.0.0.1 33105 <nil> <nil>}
I1119 02:33:26.646569 315363 main.go:143] libmachine: About to run SSH command:
hostname
I1119 02:33:26.799221 315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
I1119 02:33:26.799250 315363 ubuntu.go:182] provisioning hostname "embed-certs-168452"
I1119 02:33:26.799334 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:26.820929 315363 main.go:143] libmachine: Using SSH client type: native
I1119 02:33:26.821188 315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 127.0.0.1 33105 <nil> <nil>}
I1119 02:33:26.821210 315363 main.go:143] libmachine: About to run SSH command:
sudo hostname embed-certs-168452 && echo "embed-certs-168452" | sudo tee /etc/hostname
I1119 02:33:26.966035 315363 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-168452
I1119 02:33:26.966125 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:26.985276 315363 main.go:143] libmachine: Using SSH client type: native
I1119 02:33:26.985598 315363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil> [] 0s} 127.0.0.1 33105 <nil> <nil>}
I1119 02:33:26.985633 315363 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-168452' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-168452/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-168452' | sudo tee -a /etc/hosts;
fi
fi
I1119 02:33:27.121670 315363 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1119 02:33:27.121703 315363 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21924-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21924-11107/.minikube}
I1119 02:33:27.121727 315363 ubuntu.go:190] setting up certificates
I1119 02:33:27.123000 315363 provision.go:84] configureAuth start
I1119 02:33:27.123195 315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
I1119 02:33:27.143490 315363 provision.go:143] copyHostCerts
I1119 02:33:27.143570 315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem, removing ...
I1119 02:33:27.143580 315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem
I1119 02:33:27.143645 315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/ca.pem (1082 bytes)
I1119 02:33:27.143736 315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem, removing ...
I1119 02:33:27.143744 315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem
I1119 02:33:27.143773 315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/cert.pem (1123 bytes)
I1119 02:33:27.143829 315363 exec_runner.go:144] found /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem, removing ...
I1119 02:33:27.143835 315363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem
I1119 02:33:27.143858 315363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21924-11107/.minikube/key.pem (1675 bytes)
I1119 02:33:27.143923 315363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem org=jenkins.embed-certs-168452 san=[127.0.0.1 192.168.94.2 embed-certs-168452 localhost minikube]
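The server certificate generated at provision.go:117 carries the SAN list shown above. A condensed sketch of minting such a certificate with crypto/x509 (self-signed here for brevity, whereas minikube signs it with the ca.pem/ca-key.pem pair; the 26280h lifetime matches CertExpiration in the cluster config logged earlier):

// Condensed sketch: server certificate with the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-168452"}},
		DNSNames:     []string{"embed-certs-168452", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for the sketch; the real step uses the minikube CA as parent/signing key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}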
I1119 02:33:27.239080 315363 provision.go:177] copyRemoteCerts
I1119 02:33:27.239165 315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1119 02:33:27.239198 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:27.262397 315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
I1119 02:33:27.362967 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1119 02:33:27.387666 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1119 02:33:27.418735 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1119 02:33:27.446098 315363 provision.go:87] duration metric: took 323.082791ms to configureAuth
I1119 02:33:27.446129 315363 ubuntu.go:206] setting minikube options for container-runtime
I1119 02:33:27.446313 315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:33:27.446327 315363 machine.go:97] duration metric: took 822.891862ms to provisionDockerMachine
I1119 02:33:27.446333 315363 client.go:176] duration metric: took 7.38010166s to LocalClient.Create
I1119 02:33:27.446351 315363 start.go:167] duration metric: took 7.380160884s to libmachine.API.Create "embed-certs-168452"
I1119 02:33:27.446358 315363 start.go:293] postStartSetup for "embed-certs-168452" (driver="docker")
I1119 02:33:27.446409 315363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1119 02:33:27.446465 315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1119 02:33:27.446501 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:27.470807 315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
I1119 02:33:27.575097 315363 ssh_runner.go:195] Run: cat /etc/os-release
I1119 02:33:27.580067 315363 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1119 02:33:27.580102 315363 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1119 02:33:27.580115 315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/addons for local assets ...
I1119 02:33:27.580188 315363 filesync.go:126] Scanning /home/jenkins/minikube-integration/21924-11107/.minikube/files for local assets ...
I1119 02:33:27.580303 315363 filesync.go:149] local asset: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem -> 146572.pem in /etc/ssl/certs
I1119 02:33:27.580434 315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1119 02:33:27.588848 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /etc/ssl/certs/146572.pem (1708 bytes)
I1119 02:33:27.611498 315363 start.go:296] duration metric: took 165.12815ms for postStartSetup
I1119 02:33:27.611895 315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
I1119 02:33:27.630987 315363 profile.go:143] Saving config to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/config.json ...
I1119 02:33:27.631276 315363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1119 02:33:27.631327 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:27.650599 315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
I1119 02:33:27.747119 315363 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1119 02:33:27.752242 315363 start.go:128] duration metric: took 7.68828048s to createHost
I1119 02:33:27.752270 315363 start.go:83] releasing machines lock for "embed-certs-168452", held for 7.688466151s
I1119 02:33:27.752448 315363 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-168452
I1119 02:33:27.772595 315363 ssh_runner.go:195] Run: cat /version.json
I1119 02:33:27.772634 315363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1119 02:33:27.772668 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:27.772695 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:27.795020 315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
I1119 02:33:27.795311 315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
I1119 02:33:27.889466 315363 ssh_runner.go:195] Run: systemctl --version
I1119 02:33:27.948057 315363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1119 02:33:27.953270 315363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1119 02:33:27.953328 315363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1119 02:33:27.979962 315363 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1119 02:33:27.979983 315363 start.go:496] detecting cgroup driver to use...
I1119 02:33:27.980013 315363 detect.go:190] detected "systemd" cgroup driver on host os
I1119 02:33:27.980050 315363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1119 02:33:27.995148 315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1119 02:33:28.009176 315363 docker.go:218] disabling cri-docker service (if available) ...
I1119 02:33:28.009239 315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1119 02:33:28.028120 315363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1119 02:33:28.047654 315363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1119 02:33:28.137742 315363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1119 02:33:28.233503 315363 docker.go:234] disabling docker service ...
I1119 02:33:28.233569 315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1119 02:33:28.254546 315363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1119 02:33:28.270970 315363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1119 02:33:28.372358 315363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1119 02:33:28.475816 315363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1119 02:33:28.494447 315363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1119 02:33:28.514112 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1119 02:33:28.528713 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1119 02:33:28.542307 315363 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1119 02:33:28.542395 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1119 02:33:28.553682 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1119 02:33:28.564425 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1119 02:33:28.574563 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1119 02:33:28.585047 315363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1119 02:33:28.594876 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1119 02:33:28.606066 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1119 02:33:28.616549 315363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1119 02:33:28.627283 315363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1119 02:33:28.635846 315363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1119 02:33:28.643854 315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1119 02:33:28.727138 315363 ssh_runner.go:195] Run: sudo systemctl restart containerd
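The sed commands above rewrite /etc/containerd/config.toml in place (sandbox image, SystemdCgroup, runc v2, CNI conf_dir) before containerd is restarted. A small Go sketch of the SystemdCgroup rewrite, assuming the same file path and mode — not minikube's code:
// containerd_cfg.go - equivalent of:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart containerd` are still
	// needed afterwards, as the log shows at 02:33:28.643 and 02:33:28.727.
}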
I1119 02:33:28.825075 315363 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1119 02:33:28.825141 315363 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1119 02:33:28.829886 315363 start.go:564] Will wait 60s for crictl version
I1119 02:33:28.829954 315363 ssh_runner.go:195] Run: which crictl
I1119 02:33:28.834062 315363 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1119 02:33:28.859386 315363 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.1.5
RuntimeApiVersion: v1
I1119 02:33:28.859454 315363 ssh_runner.go:195] Run: containerd --version
I1119 02:33:28.881932 315363 ssh_runner.go:195] Run: containerd --version
I1119 02:33:28.905418 315363 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
I1119 02:33:28.906851 315363 cli_runner.go:164] Run: docker network inspect embed-certs-168452 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 02:33:28.925576 315363 ssh_runner.go:195] Run: grep 192.168.94.1 host.minikube.internal$ /etc/hosts
I1119 02:33:28.930043 315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
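The bash one-liner above drops any stale host.minikube.internal record and appends the gateway IP. A minimal Go sketch of the same idempotent /etc/hosts rewrite, with the IP and hostname taken from this log and everything else assumed for illustration:
// hosts_inject.go - grep -v the old record, append the new one, copy back.
package main

import (
	"os"
	"strings"
)

func main() {
	const record = "192.168.94.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// grep -v equivalent: drop any previous record for this hostname.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, record)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}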
I1119 02:33:28.941472 315363 kubeadm.go:884] updating cluster {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1119 02:33:28.941570 315363 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1119 02:33:28.941633 315363 ssh_runner.go:195] Run: sudo crictl images --output json
I1119 02:33:28.969084 315363 containerd.go:627] all images are preloaded for containerd runtime.
I1119 02:33:28.969102 315363 containerd.go:534] Images already preloaded, skipping extraction
I1119 02:33:28.969159 315363 ssh_runner.go:195] Run: sudo crictl images --output json
I1119 02:33:28.994529 315363 containerd.go:627] all images are preloaded for containerd runtime.
I1119 02:33:28.994549 315363 cache_images.go:86] Images are preloaded, skipping loading
I1119 02:33:28.994556 315363 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
I1119 02:33:28.994637 315363 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-168452 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1119 02:33:28.994694 315363 ssh_runner.go:195] Run: sudo crictl info
I1119 02:33:29.023174 315363 cni.go:84] Creating CNI manager for ""
I1119 02:33:29.023197 315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1119 02:33:29.023211 315363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1119 02:33:29.023232 315363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-168452 NodeName:embed-certs-168452 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1119 02:33:29.023337 315363 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.94.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-168452"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.94.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
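The config above is rendered per profile from the kubeadm options logged at kubeadm.go:190. A small Go sketch of how such a fragment could be rendered with text/template — the template text and field names below are illustrative assumptions, not minikube's actual template:
// kubeadm_tmpl.go - render an InitConfiguration fragment from profile values.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	err := t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.94.2",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///run/containerd/containerd.sock",
		"NodeName":         "embed-certs-168452",
	})
	if err != nil {
		panic(err)
	}
}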
I1119 02:33:29.023423 315363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1119 02:33:29.032358 315363 binaries.go:51] Found k8s binaries, skipping transfer
I1119 02:33:29.032438 315363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1119 02:33:29.041206 315363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1119 02:33:29.056159 315363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1119 02:33:29.074583 315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
I1119 02:33:29.089316 315363 ssh_runner.go:195] Run: grep 192.168.94.2 control-plane.minikube.internal$ /etc/hosts
I1119 02:33:29.093854 315363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1119 02:33:29.106602 315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1119 02:33:29.193818 315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1119 02:33:29.220027 315363 certs.go:69] Setting up /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452 for IP: 192.168.94.2
I1119 02:33:29.220053 315363 certs.go:195] generating shared ca certs ...
I1119 02:33:29.220075 315363 certs.go:227] acquiring lock for ca certs: {Name:mk11d6789b2333e17b3937495b501fbcca15c242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:29.220231 315363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key
I1119 02:33:29.220278 315363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key
I1119 02:33:29.220287 315363 certs.go:257] generating profile certs ...
I1119 02:33:29.220334 315363 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key
I1119 02:33:29.220351 315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt with IP's: []
I1119 02:33:29.496773 315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt ...
I1119 02:33:29.496800 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.crt: {Name:mkdb5e24f9c8b0d3d9849ba91ac24e28be0abdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:29.496993 315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key ...
I1119 02:33:29.497006 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/client.key: {Name:mk5aa88fe9180cc5f94c07d5a968428b4ccf37cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:29.497088 315363 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2
I1119 02:33:29.497102 315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
W1119 02:33:26.721525 301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
W1119 02:33:29.215940 301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
I1119 02:33:28.165835 307222 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1119 02:33:28.176028 307222 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
I1119 02:33:28.176052 307222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I1119 02:33:28.195615 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1119 02:33:28.450816 307222 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1119 02:33:28.450899 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:28.450933 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-483142 minikube.k8s.io/updated_at=2025_11_19T02_33_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=no-preload-483142 minikube.k8s.io/primary=true
I1119 02:33:28.538275 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:28.538445 307222 ops.go:34] apiserver oom_adj: -16
I1119 02:33:29.038968 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:29.539224 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:30.038530 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:30.539271 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:31.038434 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:31.538496 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:32.038945 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:32.539001 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:33.038571 307222 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:33.129034 307222 kubeadm.go:1114] duration metric: took 4.678195875s to wait for elevateKubeSystemPrivileges
I1119 02:33:33.129095 307222 kubeadm.go:403] duration metric: took 17.40558167s to StartCluster
I1119 02:33:33.129119 307222 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:33.129202 307222 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21924-11107/kubeconfig
I1119 02:33:33.131182 307222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:33.131481 307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1119 02:33:33.131519 307222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1119 02:33:33.131585 307222 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1119 02:33:33.131706 307222 addons.go:70] Setting storage-provisioner=true in profile "no-preload-483142"
I1119 02:33:33.131748 307222 config.go:182] Loaded profile config "no-preload-483142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:33:33.131794 307222 addons.go:70] Setting default-storageclass=true in profile "no-preload-483142"
I1119 02:33:33.131827 307222 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-483142"
I1119 02:33:33.131810 307222 addons.go:239] Setting addon storage-provisioner=true in "no-preload-483142"
I1119 02:33:33.131959 307222 host.go:66] Checking if "no-preload-483142" exists ...
I1119 02:33:33.132200 307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
I1119 02:33:33.132480 307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
I1119 02:33:33.134152 307222 out.go:179] * Verifying Kubernetes components...
I1119 02:33:33.135585 307222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1119 02:33:33.159834 307222 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:33:33.160479 307222 addons.go:239] Setting addon default-storageclass=true in "no-preload-483142"
I1119 02:33:33.160545 307222 host.go:66] Checking if "no-preload-483142" exists ...
I1119 02:33:33.161017 307222 cli_runner.go:164] Run: docker container inspect no-preload-483142 --format={{.State.Status}}
I1119 02:33:33.161390 307222 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:33:33.161410 307222 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1119 02:33:33.161458 307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
I1119 02:33:33.198354 307222 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1119 02:33:33.198390 307222 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1119 02:33:33.198448 307222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-483142
I1119 02:33:33.198522 307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
I1119 02:33:33.223657 307222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33100 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/no-preload-483142/id_rsa Username:docker}
I1119 02:33:33.248952 307222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1119 02:33:33.322673 307222 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1119 02:33:33.348662 307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:33:33.354901 307222 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1119 02:33:33.503051 307222 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
I1119 02:33:33.504327 307222 node_ready.go:35] waiting up to 6m0s for node "no-preload-483142" to be "Ready" ...
I1119 02:33:33.756829 307222 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1119 02:33:29.844643 315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 ...
I1119 02:33:29.844667 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2: {Name:mk1596cf7137a998e277abf94c4c839907009a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:29.844872 315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 ...
I1119 02:33:29.844901 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2: {Name:mk9d817ab63555ebb02e0590916ce23352cf008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:29.845022 315363 certs.go:382] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt
I1119 02:33:29.845144 315363 certs.go:386] copying /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key.0e221cc2 -> /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key
I1119 02:33:29.845239 315363 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key
I1119 02:33:29.845260 315363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt with IP's: []
I1119 02:33:30.013529 315363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt ...
I1119 02:33:30.013564 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt: {Name:mka683634a30ab1845434f0fc49f75059694b447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:30.013775 315363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key ...
I1119 02:33:30.013796 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key: {Name:mk9e8dbde74fbcae5bb0e966570ae4f43c6f07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:30.014054 315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem (1338 bytes)
W1119 02:33:30.014108 315363 certs.go:480] ignoring /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657_empty.pem, impossibly tiny 0 bytes
I1119 02:33:30.014124 315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca-key.pem (1675 bytes)
I1119 02:33:30.014183 315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/ca.pem (1082 bytes)
I1119 02:33:30.014219 315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/cert.pem (1123 bytes)
I1119 02:33:30.014257 315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/certs/key.pem (1675 bytes)
I1119 02:33:30.014318 315363 certs.go:484] found cert: /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem (1708 bytes)
I1119 02:33:30.014986 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1119 02:33:30.034798 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1119 02:33:30.054155 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1119 02:33:30.074272 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1119 02:33:30.094396 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1119 02:33:30.114605 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1119 02:33:30.133991 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1119 02:33:30.153105 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/profiles/embed-certs-168452/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1119 02:33:30.172052 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/files/etc/ssl/certs/146572.pem --> /usr/share/ca-certificates/146572.pem (1708 bytes)
I1119 02:33:30.194139 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1119 02:33:30.212546 315363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21924-11107/.minikube/certs/14657.pem --> /usr/share/ca-certificates/14657.pem (1338 bytes)
I1119 02:33:30.231534 315363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1119 02:33:30.246493 315363 ssh_runner.go:195] Run: openssl version
I1119 02:33:30.252586 315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146572.pem && ln -fs /usr/share/ca-certificates/146572.pem /etc/ssl/certs/146572.pem"
I1119 02:33:30.261620 315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146572.pem
I1119 02:33:30.265824 315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 02:02 /usr/share/ca-certificates/146572.pem
I1119 02:33:30.265886 315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146572.pem
I1119 02:33:30.301164 315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/146572.pem /etc/ssl/certs/3ec20f2e.0"
I1119 02:33:30.310429 315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1119 02:33:30.319818 315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1119 02:33:30.323998 315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 01:57 /usr/share/ca-certificates/minikubeCA.pem
I1119 02:33:30.324046 315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1119 02:33:30.360567 315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1119 02:33:30.370492 315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14657.pem && ln -fs /usr/share/ca-certificates/14657.pem /etc/ssl/certs/14657.pem"
I1119 02:33:30.380695 315363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14657.pem
I1119 02:33:30.385171 315363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 02:02 /usr/share/ca-certificates/14657.pem
I1119 02:33:30.385241 315363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14657.pem
I1119 02:33:30.422375 315363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14657.pem /etc/ssl/certs/51391683.0"
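The sequence above installs each extra certificate under /usr/share/ca-certificates and links it into /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of that hash-and-link step in Go — the openssl invocation is the one from the log, the wrapper itself is an assumption:
// cahash_link.go - compute the subject hash of a cert and symlink it as <hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: remove a stale link first, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}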
I1119 02:33:30.432329 315363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1119 02:33:30.436333 315363 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1119 02:33:30.436432 315363 kubeadm.go:401] StartCluster: {Name:embed-certs-168452 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763507788-21924@sha256:1e20c07242571f3eb6bbb213b88269c923b5578034662e07409047e7102bdd1a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-168452 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1119 02:33:30.436494 315363 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1119 02:33:30.436588 315363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1119 02:33:30.465191 315363 cri.go:89] found id: ""
I1119 02:33:30.465255 315363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1119 02:33:30.474328 315363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1119 02:33:30.483132 315363 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1119 02:33:30.483196 315363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1119 02:33:30.491249 315363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1119 02:33:30.491272 315363 kubeadm.go:158] found existing configuration files:
I1119 02:33:30.491320 315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1119 02:33:30.499072 315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1119 02:33:30.499140 315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1119 02:33:30.507018 315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1119 02:33:30.514836 315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1119 02:33:30.514890 315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1119 02:33:30.523396 315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1119 02:33:30.532721 315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1119 02:33:30.532772 315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1119 02:33:30.541409 315363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1119 02:33:30.550090 315363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1119 02:33:30.550157 315363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1119 02:33:30.558693 315363 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1119 02:33:30.636057 315363 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
I1119 02:33:30.702518 315363 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1119 02:33:31.715333 301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
W1119 02:33:33.715963 301934 node_ready.go:57] node "old-k8s-version-691094" has "Ready":"False" status (will retry)
I1119 02:33:34.216972 301934 node_ready.go:49] node "old-k8s-version-691094" is "Ready"
I1119 02:33:34.217010 301934 node_ready.go:38] duration metric: took 14.505569399s for node "old-k8s-version-691094" to be "Ready" ...
I1119 02:33:34.217027 301934 api_server.go:52] waiting for apiserver process to appear ...
I1119 02:33:34.217083 301934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1119 02:33:34.235995 301934 api_server.go:72] duration metric: took 14.98160502s to wait for apiserver process to appear ...
I1119 02:33:34.236024 301934 api_server.go:88] waiting for apiserver healthz status ...
I1119 02:33:34.236046 301934 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
I1119 02:33:34.242612 301934 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
ok
I1119 02:33:34.244469 301934 api_server.go:141] control plane version: v1.28.0
I1119 02:33:34.244501 301934 api_server.go:131] duration metric: took 8.468136ms to wait for apiserver health ...
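The apiserver readiness check above is a plain GET against /healthz on the cluster endpoint. A minimal sketch of that probe — for brevity this skips certificate verification, whereas the real check trusts the cluster CA:
// healthz_probe.go - poll https://<node-ip>:8443/healthz and print the result.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}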
I1119 02:33:34.244512 301934 system_pods.go:43] waiting for kube-system pods to appear ...
I1119 02:33:34.249250 301934 system_pods.go:59] 8 kube-system pods found
I1119 02:33:34.249293 301934 system_pods.go:61] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:34.249301 301934 system_pods.go:61] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
I1119 02:33:34.249308 301934 system_pods.go:61] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
I1119 02:33:34.249326 301934 system_pods.go:61] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
I1119 02:33:34.249331 301934 system_pods.go:61] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
I1119 02:33:34.249336 301934 system_pods.go:61] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
I1119 02:33:34.249340 301934 system_pods.go:61] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
I1119 02:33:34.249347 301934 system_pods.go:61] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:34.249389 301934 system_pods.go:74] duration metric: took 4.842718ms to wait for pod list to return data ...
I1119 02:33:34.249403 301934 default_sa.go:34] waiting for default service account to be created ...
I1119 02:33:34.251979 301934 default_sa.go:45] found service account: "default"
I1119 02:33:34.252000 301934 default_sa.go:55] duration metric: took 2.59102ms for default service account to be created ...
I1119 02:33:34.252008 301934 system_pods.go:116] waiting for k8s-apps to be running ...
I1119 02:33:34.256098 301934 system_pods.go:86] 8 kube-system pods found
I1119 02:33:34.256141 301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:34.256148 301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
I1119 02:33:34.256155 301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
I1119 02:33:34.256158 301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
I1119 02:33:34.256163 301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
I1119 02:33:34.256166 301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
I1119 02:33:34.256169 301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
I1119 02:33:34.256173 301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:34.256204 301934 retry.go:31] will retry after 294.08163ms: missing components: kube-dns
I1119 02:33:34.555117 301934 system_pods.go:86] 8 kube-system pods found
I1119 02:33:34.555149 301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:34.555155 301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
I1119 02:33:34.555160 301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
I1119 02:33:34.555164 301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
I1119 02:33:34.555168 301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
I1119 02:33:34.555171 301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
I1119 02:33:34.555174 301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
I1119 02:33:34.555181 301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:34.555200 301934 retry.go:31] will retry after 239.208285ms: missing components: kube-dns
I1119 02:33:34.801314 301934 system_pods.go:86] 8 kube-system pods found
I1119 02:33:34.801356 301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:34.801397 301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
I1119 02:33:34.801408 301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
I1119 02:33:34.801414 301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
I1119 02:33:34.801421 301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
I1119 02:33:34.801426 301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
I1119 02:33:34.801432 301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
I1119 02:33:34.801446 301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:34.801465 301934 retry.go:31] will retry after 406.320974ms: missing components: kube-dns
I1119 02:33:33.758898 307222 addons.go:515] duration metric: took 627.311179ms for enable addons: enabled=[storage-provisioner default-storageclass]
I1119 02:33:34.007122 307222 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-483142" context rescaled to 1 replicas
W1119 02:33:35.507777 307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
I1119 02:33:35.212153 301934 system_pods.go:86] 8 kube-system pods found
I1119 02:33:35.212193 301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:35.212202 301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
I1119 02:33:35.212208 301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
I1119 02:33:35.212214 301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
I1119 02:33:35.212221 301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
I1119 02:33:35.212226 301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
I1119 02:33:35.212230 301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
I1119 02:33:35.212235 301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
I1119 02:33:35.212252 301934 retry.go:31] will retry after 502.533324ms: missing components: kube-dns
I1119 02:33:35.719172 301934 system_pods.go:86] 8 kube-system pods found
I1119 02:33:35.719211 301934 system_pods.go:89] "coredns-5dd5756b68-bbvqz" [56c0e21e-9d86-46c6-bc02-2a75554c0f07] Running
I1119 02:33:35.719220 301934 system_pods.go:89] "etcd-old-k8s-version-691094" [76abba5a-69d9-4c2c-aeb1-0225fad93a1d] Running
I1119 02:33:35.719225 301934 system_pods.go:89] "kindnet-b9cwh" [3d3b11e4-1f4a-4f5c-a9ad-0b6854cc0352] Running
I1119 02:33:35.719231 301934 system_pods.go:89] "kube-apiserver-old-k8s-version-691094" [f47acebe-490a-4f18-bb22-ab3375a572a9] Running
I1119 02:33:35.719238 301934 system_pods.go:89] "kube-controller-manager-old-k8s-version-691094" [b1768141-5e2e-4b98-9747-6d96a1e4f121] Running
I1119 02:33:35.719243 301934 system_pods.go:89] "kube-proxy-79df5" [d23dd2d3-6511-45fb-ae70-d1da7b9b6b28] Running
I1119 02:33:35.719248 301934 system_pods.go:89] "kube-scheduler-old-k8s-version-691094" [ac8b4be1-254b-4014-93ea-35ba777dc762] Running
I1119 02:33:35.719254 301934 system_pods.go:89] "storage-provisioner" [135636ea-f34f-4bfc-b2f6-cbbf3e91ca30] Running
I1119 02:33:35.719267 301934 system_pods.go:126] duration metric: took 1.46725409s to wait for k8s-apps to be running ...
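The retries above poll the kube-system pod list until nothing is missing (the "missing components: kube-dns" lines). A sketch of that wait loop with client-go — the kubeconfig path and fixed poll interval are assumptions, and the real code retries with a growing backoff:
// syspods_wait.go - wait until every kube-system pod reports phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		pending := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				pending++
			}
		}
		if pending == 0 && len(pods.Items) > 0 {
			fmt.Println("all kube-system pods running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}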
I1119 02:33:35.719280 301934 system_svc.go:44] waiting for kubelet service to be running ....
I1119 02:33:35.719333 301934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1119 02:33:35.733944 301934 system_svc.go:56] duration metric: took 14.654804ms WaitForService to wait for kubelet
I1119 02:33:35.733974 301934 kubeadm.go:587] duration metric: took 16.479589704s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1119 02:33:35.733994 301934 node_conditions.go:102] verifying NodePressure condition ...
I1119 02:33:35.736881 301934 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1119 02:33:35.736904 301934 node_conditions.go:123] node cpu capacity is 8
I1119 02:33:35.736917 301934 node_conditions.go:105] duration metric: took 2.917087ms to run NodePressure ...
I1119 02:33:35.736947 301934 start.go:242] waiting for startup goroutines ...
I1119 02:33:35.736956 301934 start.go:247] waiting for cluster config update ...
I1119 02:33:35.736966 301934 start.go:256] writing updated cluster config ...
I1119 02:33:35.737252 301934 ssh_runner.go:195] Run: rm -f paused
I1119 02:33:35.741706 301934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1119 02:33:35.746693 301934 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:35.751796 301934 pod_ready.go:94] pod "coredns-5dd5756b68-bbvqz" is "Ready"
I1119 02:33:35.751821 301934 pod_ready.go:86] duration metric: took 5.102077ms for pod "coredns-5dd5756b68-bbvqz" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:35.754811 301934 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:35.759826 301934 pod_ready.go:94] pod "etcd-old-k8s-version-691094" is "Ready"
I1119 02:33:35.759852 301934 pod_ready.go:86] duration metric: took 5.017899ms for pod "etcd-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:35.763701 301934 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:35.768670 301934 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-691094" is "Ready"
I1119 02:33:35.768693 301934 pod_ready.go:86] duration metric: took 4.969901ms for pod "kube-apiserver-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:35.772227 301934 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:36.146684 301934 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-691094" is "Ready"
I1119 02:33:36.146718 301934 pod_ready.go:86] duration metric: took 374.468133ms for pod "kube-controller-manager-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:36.347472 301934 pod_ready.go:83] waiting for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:36.746791 301934 pod_ready.go:94] pod "kube-proxy-79df5" is "Ready"
I1119 02:33:36.746855 301934 pod_ready.go:86] duration metric: took 399.347819ms for pod "kube-proxy-79df5" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:36.946961 301934 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:37.347059 301934 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-691094" is "Ready"
I1119 02:33:37.347090 301934 pod_ready.go:86] duration metric: took 400.10454ms for pod "kube-scheduler-old-k8s-version-691094" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:37.347108 301934 pod_ready.go:40] duration metric: took 1.605370699s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1119 02:33:37.406793 301934 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
I1119 02:33:37.408685 301934 out.go:203]
W1119 02:33:37.410052 301934 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
I1119 02:33:37.411691 301934 out.go:179] - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
I1119 02:33:37.413481 301934 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-691094" cluster and "default" namespace by default
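Note: the start.go line above compares the host kubectl (1.34.2) against the cluster version (1.28.0) and warns because the minor-version skew is 6. A rough sketch of that comparison, assuming plain "major.minor.patch" strings; the helper name is mine, not minikube's.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor versions of
    // two "major.minor.patch" strings, e.g. "1.34.2" vs "1.28.0" -> 6.
    func minorSkew(a, b string) (int, error) {
        pa := strings.Split(a, ".")
        pb := strings.Split(b, ".")
        if len(pa) < 2 || len(pb) < 2 {
            return 0, fmt.Errorf("unexpected version format: %q vs %q", a, b)
        }
        ma, err := strconv.Atoi(pa[1])
        if err != nil {
            return 0, err
        }
        mb, err := strconv.Atoi(pb[1])
        if err != nil {
            return 0, err
        }
        if ma > mb {
            return ma - mb, nil
        }
        return mb - ma, nil
    }

    func main() {
        skew, _ := minorSkew("1.34.2", "1.28.0")
        if skew > 1 {
            fmt.Printf("kubectl and cluster differ by %d minor versions; expect incompatibilities\n", skew)
        }
    }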
W1119 02:33:37.511440 307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
W1119 02:33:40.007282 307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
I1119 02:33:42.519187 315363 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1119 02:33:42.519270 315363 kubeadm.go:319] [preflight] Running pre-flight checks
I1119 02:33:42.519471 315363 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1119 02:33:42.519558 315363 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
I1119 02:33:42.519641 315363 kubeadm.go:319] OS: Linux
I1119 02:33:42.519723 315363 kubeadm.go:319] CGROUPS_CPU: enabled
I1119 02:33:42.519793 315363 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1119 02:33:42.519863 315363 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1119 02:33:42.519937 315363 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1119 02:33:42.520011 315363 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1119 02:33:42.520082 315363 kubeadm.go:319] CGROUPS_PIDS: enabled
I1119 02:33:42.520161 315363 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1119 02:33:42.520246 315363 kubeadm.go:319] CGROUPS_IO: enabled
I1119 02:33:42.520396 315363 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1119 02:33:42.520528 315363 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1119 02:33:42.520640 315363 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1119 02:33:42.520739 315363 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1119 02:33:42.522619 315363 out.go:252] - Generating certificates and keys ...
I1119 02:33:42.522717 315363 kubeadm.go:319] [certs] Using existing ca certificate authority
I1119 02:33:42.522778 315363 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1119 02:33:42.522841 315363 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1119 02:33:42.522898 315363 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1119 02:33:42.522948 315363 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1119 02:33:42.522986 315363 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1119 02:33:42.523065 315363 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1119 02:33:42.523231 315363 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
I1119 02:33:42.523301 315363 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1119 02:33:42.523451 315363 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-168452 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
I1119 02:33:42.523527 315363 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1119 02:33:42.523599 315363 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1119 02:33:42.523658 315363 kubeadm.go:319] [certs] Generating "sa" key and public key
I1119 02:33:42.523737 315363 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1119 02:33:42.523787 315363 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1119 02:33:42.523833 315363 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1119 02:33:42.523879 315363 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1119 02:33:42.523945 315363 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1119 02:33:42.524004 315363 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1119 02:33:42.524082 315363 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1119 02:33:42.524137 315363 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1119 02:33:42.525751 315363 out.go:252] - Booting up control plane ...
I1119 02:33:42.525831 315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1119 02:33:42.525893 315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1119 02:33:42.525997 315363 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1119 02:33:42.526121 315363 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1119 02:33:42.526235 315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1119 02:33:42.526323 315363 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1119 02:33:42.526401 315363 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1119 02:33:42.526441 315363 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1119 02:33:42.526546 315363 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1119 02:33:42.526633 315363 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1119 02:33:42.526684 315363 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001668097s
I1119 02:33:42.526759 315363 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1119 02:33:42.526828 315363 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
I1119 02:33:42.526912 315363 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1119 02:33:42.526979 315363 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1119 02:33:42.527060 315363 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.143588684s
I1119 02:33:42.527116 315363 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.751163591s
I1119 02:33:42.527185 315363 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002351229s
I1119 02:33:42.527279 315363 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1119 02:33:42.527418 315363 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1119 02:33:42.527475 315363 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1119 02:33:42.527642 315363 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-168452 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1119 02:33:42.527698 315363 kubeadm.go:319] [bootstrap-token] Using token: f9q4qi.t8dfm2zfbs2z2sgs
I1119 02:33:42.529100 315363 out.go:252] - Configuring RBAC rules ...
I1119 02:33:42.529232 315363 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1119 02:33:42.529348 315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1119 02:33:42.529576 315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1119 02:33:42.529779 315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1119 02:33:42.529949 315363 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1119 02:33:42.530070 315363 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1119 02:33:42.530217 315363 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1119 02:33:42.530321 315363 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1119 02:33:42.530403 315363 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1119 02:33:42.530413 315363 kubeadm.go:319]
I1119 02:33:42.530492 315363 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1119 02:33:42.530502 315363 kubeadm.go:319]
I1119 02:33:42.530604 315363 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1119 02:33:42.530618 315363 kubeadm.go:319]
I1119 02:33:42.530647 315363 kubeadm.go:319] mkdir -p $HOME/.kube
I1119 02:33:42.530726 315363 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1119 02:33:42.530797 315363 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1119 02:33:42.530809 315363 kubeadm.go:319]
I1119 02:33:42.530880 315363 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1119 02:33:42.530885 315363 kubeadm.go:319]
I1119 02:33:42.530954 315363 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1119 02:33:42.530981 315363 kubeadm.go:319]
I1119 02:33:42.531052 315363 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1119 02:33:42.531164 315363 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1119 02:33:42.531261 315363 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1119 02:33:42.531271 315363 kubeadm.go:319]
I1119 02:33:42.531424 315363 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1119 02:33:42.531551 315363 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1119 02:33:42.531570 315363 kubeadm.go:319]
I1119 02:33:42.531690 315363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
I1119 02:33:42.531850 315363 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a \
I1119 02:33:42.531878 315363 kubeadm.go:319] --control-plane
I1119 02:33:42.531885 315363 kubeadm.go:319]
I1119 02:33:42.531966 315363 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1119 02:33:42.531972 315363 kubeadm.go:319]
I1119 02:33:42.532046 315363 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9q4qi.t8dfm2zfbs2z2sgs \
I1119 02:33:42.532149 315363 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:7672c4bccf136c60db27ba1101c0494bf335a1a16bcd609d07ab95cd78c58a5a
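Note: the --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA certificate's Subject Public Key Info (SPKI). A small sketch that recomputes it from a CA certificate PEM; the file path is illustrative only.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path is illustrative; in this log the cluster certs live under
        // /var/lib/minikube/certs on the node.
        caPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(caPEM)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }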
I1119 02:33:42.532161 315363 cni.go:84] Creating CNI manager for ""
I1119 02:33:42.532167 315363 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1119 02:33:42.535194 315363 out.go:179] * Configuring CNI (Container Networking Interface) ...
I1119 02:33:42.536650 315363 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I1119 02:33:42.541710 315363 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
I1119 02:33:42.541734 315363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
I1119 02:33:42.556040 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1119 02:33:42.817018 315363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1119 02:33:42.817147 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:42.817217 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-168452 minikube.k8s.io/updated_at=2025_11_19T02_33_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ebd49efe8b7d44f89fc96e21beffcd64dfee8277 minikube.k8s.io/name=embed-certs-168452 minikube.k8s.io/primary=true
I1119 02:33:42.828812 315363 ops.go:34] apiserver oom_adj: -16
I1119 02:33:42.896633 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:43.396810 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:43.896801 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:44.397677 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
W1119 02:33:46.450455 208368 system_pods.go:55] pod list returned error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
I1119 02:33:46.452233 208368 out.go:203]
W1119 02:33:46.453522 208368 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for system pods: apiserver never returned a pod list
W1119 02:33:46.453544 208368 out.go:285] *
W1119 02:33:46.455831 208368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1119 02:33:46.457044 208368 out.go:203]
W1119 02:33:42.007484 307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
W1119 02:33:44.007813 307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
W1119 02:33:46.008192 307222 node_ready.go:57] node "no-preload-483142" has "Ready":"False" status (will retry)
I1119 02:33:44.897377 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:45.397137 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:45.897616 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:46.397448 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:46.896710 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:47.397632 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:47.897150 315363 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1119 02:33:48.003028 315363 kubeadm.go:1114] duration metric: took 5.18596901s to wait for elevateKubeSystemPrivileges
I1119 02:33:48.003056 315363 kubeadm.go:403] duration metric: took 17.566632128s to StartCluster
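Note: the repeated "kubectl get sa default" runs above are minikube polling until the default ServiceAccount exists before it finishes bringing the cluster up. A client-go sketch of the same wait, purely illustrative and not minikube's code; the kubeconfig path is taken from the commands in the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount appears in the
    // "default" namespace, mirroring the repeated `kubectl get sa default` above.
    func waitForDefaultSA(cs *kubernetes.Clientset, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account never appeared: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForDefaultSA(cs, 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("default service account is present")
    }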
I1119 02:33:48.003071 315363 settings.go:142] acquiring lock: {Name:mka23b77a5b0acf39a3c823ca2c708cd59c6f5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:48.003125 315363 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21924-11107/kubeconfig
I1119 02:33:48.005668 315363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21924-11107/kubeconfig: {Name:mka1017ae43351e828a5b28d36aa5ae57c3e40e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1119 02:33:48.005964 315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1119 02:33:48.005984 315363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1119 02:33:48.006098 315363 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1119 02:33:48.006191 315363 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-168452"
I1119 02:33:48.006211 315363 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-168452"
I1119 02:33:48.006209 315363 addons.go:70] Setting default-storageclass=true in profile "embed-certs-168452"
I1119 02:33:48.006218 315363 config.go:182] Loaded profile config "embed-certs-168452": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1119 02:33:48.006231 315363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-168452"
I1119 02:33:48.006249 315363 host.go:66] Checking if "embed-certs-168452" exists ...
I1119 02:33:48.006692 315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
I1119 02:33:48.006819 315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
I1119 02:33:48.007901 315363 out.go:179] * Verifying Kubernetes components...
I1119 02:33:48.009142 315363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1119 02:33:48.032568 315363 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1119 02:33:48.032594 315363 addons.go:239] Setting addon default-storageclass=true in "embed-certs-168452"
I1119 02:33:48.032649 315363 host.go:66] Checking if "embed-certs-168452" exists ...
I1119 02:33:48.033140 315363 cli_runner.go:164] Run: docker container inspect embed-certs-168452 --format={{.State.Status}}
I1119 02:33:48.034177 315363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:33:48.034248 315363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1119 02:33:48.034332 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:48.063775 315363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1119 02:33:48.063802 315363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1119 02:33:48.063864 315363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-168452
I1119 02:33:48.067763 315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
I1119 02:33:48.088481 315363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33105 SSHKeyPath:/home/jenkins/minikube-integration/21924-11107/.minikube/machines/embed-certs-168452/id_rsa Username:docker}
I1119 02:33:48.118977 315363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.94.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1119 02:33:48.181811 315363 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1119 02:33:48.192106 315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1119 02:33:48.217510 315363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1119 02:33:48.350174 315363 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
I1119 02:33:48.351838 315363 node_ready.go:35] waiting up to 6m0s for node "embed-certs-168452" to be "Ready" ...
I1119 02:33:48.575859 315363 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1119 02:33:48.577031 315363 addons.go:515] duration metric: took 570.934719ms for enable addons: enabled=[storage-provisioner default-storageclass]
I1119 02:33:48.855157 315363 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-168452" context rescaled to 1 replicas
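Note: the kapi.go line above rescales the coredns Deployment to a single replica, since minikube runs one CoreDNS pod by default. A client-go sketch of such a rescale via a strategic-merge patch; this is only an illustration, not the actual kapi.go code, and the kubeconfig path is the one used by the commands in the log.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Patch spec.replicas down to 1 on the coredns Deployment in kube-system.
        patch := []byte(`{"spec":{"replicas":1}}`)
        _, err = cs.AppsV1().Deployments("kube-system").Patch(
            context.TODO(), "coredns", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(`"coredns" deployment rescaled to 1 replica`)
    }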
I1119 02:33:47.507132 307222 node_ready.go:49] node "no-preload-483142" is "Ready"
I1119 02:33:47.507166 307222 node_ready.go:38] duration metric: took 14.002781703s for node "no-preload-483142" to be "Ready" ...
I1119 02:33:47.507196 307222 api_server.go:52] waiting for apiserver process to appear ...
I1119 02:33:47.507253 307222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1119 02:33:47.522586 307222 api_server.go:72] duration metric: took 14.39103106s to wait for apiserver process to appear ...
I1119 02:33:47.522619 307222 api_server.go:88] waiting for apiserver healthz status ...
I1119 02:33:47.522641 307222 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1119 02:33:47.526803 307222 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1119 02:33:47.527974 307222 api_server.go:141] control plane version: v1.34.1
I1119 02:33:47.528002 307222 api_server.go:131] duration metric: took 5.376603ms to wait for apiserver health ...
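Note: the api_server.go lines above probe the apiserver's /healthz endpoint over HTTPS and treat a 200 response with body "ok" as healthy. A stdlib sketch of the same probe against the address in this log; certificate verification is skipped only to keep the sketch short, a real client should trust the cluster CA instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify keeps the sketch minimal; do not do this in real code.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }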
I1119 02:33:47.528022 307222 system_pods.go:43] waiting for kube-system pods to appear ...
I1119 02:33:47.531978 307222 system_pods.go:59] 8 kube-system pods found
I1119 02:33:47.532021 307222 system_pods.go:61] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:47.532030 307222 system_pods.go:61] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
I1119 02:33:47.532039 307222 system_pods.go:61] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
I1119 02:33:47.532046 307222 system_pods.go:61] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
I1119 02:33:47.532053 307222 system_pods.go:61] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
I1119 02:33:47.532059 307222 system_pods.go:61] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
I1119 02:33:47.532066 307222 system_pods.go:61] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
I1119 02:33:47.532078 307222 system_pods.go:61] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:47.532088 307222 system_pods.go:74] duration metric: took 4.058015ms to wait for pod list to return data ...
I1119 02:33:47.532104 307222 default_sa.go:34] waiting for default service account to be created ...
I1119 02:33:47.535565 307222 default_sa.go:45] found service account: "default"
I1119 02:33:47.535586 307222 default_sa.go:55] duration metric: took 3.475549ms for default service account to be created ...
I1119 02:33:47.535596 307222 system_pods.go:116] waiting for k8s-apps to be running ...
I1119 02:33:47.539134 307222 system_pods.go:86] 8 kube-system pods found
I1119 02:33:47.539173 307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:47.539181 307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
I1119 02:33:47.539188 307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
I1119 02:33:47.539192 307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
I1119 02:33:47.539196 307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
I1119 02:33:47.539204 307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
I1119 02:33:47.539210 307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
I1119 02:33:47.539215 307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:47.539249 307222 retry.go:31] will retry after 294.264342ms: missing components: kube-dns
I1119 02:33:47.840195 307222 system_pods.go:86] 8 kube-system pods found
I1119 02:33:47.840235 307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:47.840244 307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
I1119 02:33:47.840253 307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
I1119 02:33:47.840257 307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
I1119 02:33:47.840262 307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
I1119 02:33:47.840267 307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
I1119 02:33:47.840272 307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
I1119 02:33:47.840288 307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:47.840308 307222 retry.go:31] will retry after 249.747879ms: missing components: kube-dns
I1119 02:33:48.097280 307222 system_pods.go:86] 8 kube-system pods found
I1119 02:33:48.097316 307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:48.097322 307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
I1119 02:33:48.097331 307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
I1119 02:33:48.097336 307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
I1119 02:33:48.097342 307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
I1119 02:33:48.097346 307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
I1119 02:33:48.097350 307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
I1119 02:33:48.097356 307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:48.097389 307222 retry.go:31] will retry after 312.943754ms: missing components: kube-dns
I1119 02:33:48.416167 307222 system_pods.go:86] 8 kube-system pods found
I1119 02:33:48.416224 307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:48.416233 307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
I1119 02:33:48.416242 307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
I1119 02:33:48.416249 307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
I1119 02:33:48.416265 307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
I1119 02:33:48.416285 307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
I1119 02:33:48.416290 307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
I1119 02:33:48.416304 307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:48.416338 307222 retry.go:31] will retry after 380.92269ms: missing components: kube-dns
I1119 02:33:48.802673 307222 system_pods.go:86] 8 kube-system pods found
I1119 02:33:48.802712 307222 system_pods.go:89] "coredns-66bc5c9577-zgfk9" [a3d24a51-2fec-4ae7-852e-c65aef957597] Running
I1119 02:33:48.802721 307222 system_pods.go:89] "etcd-no-preload-483142" [7c2b0e3b-9242-4a7b-adbc-49e846f76e61] Running
I1119 02:33:48.802726 307222 system_pods.go:89] "kindnet-6nr7d" [b6bf7df0-8af6-4156-990c-6f70cc159a8c] Running
I1119 02:33:48.802731 307222 system_pods.go:89] "kube-apiserver-no-preload-483142" [1321916d-38ae-4f26-9d9f-0917576d2d6b] Running
I1119 02:33:48.802737 307222 system_pods.go:89] "kube-controller-manager-no-preload-483142" [62d9ccdb-1224-4ba6-a8b6-4270a0043a26] Running
I1119 02:33:48.802742 307222 system_pods.go:89] "kube-proxy-xhrdt" [2ed3d00d-7760-4eed-af73-abf314cf5901] Running
I1119 02:33:48.802755 307222 system_pods.go:89] "kube-scheduler-no-preload-483142" [c58ff0b6-59c3-4677-8770-42bfef0d53a2] Running
I1119 02:33:48.802764 307222 system_pods.go:89] "storage-provisioner" [c66a6926-3a4a-4aa9-b40b-349e1b056683] Running
I1119 02:33:48.802775 307222 system_pods.go:126] duration metric: took 1.26717246s to wait for k8s-apps to be running ...
I1119 02:33:48.802788 307222 system_svc.go:44] waiting for kubelet service to be running ....
I1119 02:33:48.802838 307222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1119 02:33:48.819234 307222 system_svc.go:56] duration metric: took 16.435872ms WaitForService to wait for kubelet
I1119 02:33:48.819260 307222 kubeadm.go:587] duration metric: took 15.68771243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1119 02:33:48.819276 307222 node_conditions.go:102] verifying NodePressure condition ...
I1119 02:33:48.823861 307222 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1119 02:33:48.823901 307222 node_conditions.go:123] node cpu capacity is 8
I1119 02:33:48.823924 307222 node_conditions.go:105] duration metric: took 4.642889ms to run NodePressure ...
I1119 02:33:48.823938 307222 start.go:242] waiting for startup goroutines ...
I1119 02:33:48.823947 307222 start.go:247] waiting for cluster config update ...
I1119 02:33:48.823960 307222 start.go:256] writing updated cluster config ...
I1119 02:33:48.824308 307222 ssh_runner.go:195] Run: rm -f paused
I1119 02:33:48.829946 307222 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1119 02:33:48.834766 307222 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:48.839922 307222 pod_ready.go:94] pod "coredns-66bc5c9577-zgfk9" is "Ready"
I1119 02:33:48.839950 307222 pod_ready.go:86] duration metric: took 5.154322ms for pod "coredns-66bc5c9577-zgfk9" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:48.842702 307222 pod_ready.go:83] waiting for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:48.848818 307222 pod_ready.go:94] pod "etcd-no-preload-483142" is "Ready"
I1119 02:33:48.848850 307222 pod_ready.go:86] duration metric: took 6.115348ms for pod "etcd-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:48.851685 307222 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:48.856283 307222 pod_ready.go:94] pod "kube-apiserver-no-preload-483142" is "Ready"
I1119 02:33:48.856303 307222 pod_ready.go:86] duration metric: took 4.595808ms for pod "kube-apiserver-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:48.858418 307222 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:49.235039 307222 pod_ready.go:94] pod "kube-controller-manager-no-preload-483142" is "Ready"
I1119 02:33:49.235070 307222 pod_ready.go:86] duration metric: took 376.631643ms for pod "kube-controller-manager-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:49.435524 307222 pod_ready.go:83] waiting for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:49.834741 307222 pod_ready.go:94] pod "kube-proxy-xhrdt" is "Ready"
I1119 02:33:49.834767 307222 pod_ready.go:86] duration metric: took 399.219221ms for pod "kube-proxy-xhrdt" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:50.035303 307222 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:50.434632 307222 pod_ready.go:94] pod "kube-scheduler-no-preload-483142" is "Ready"
I1119 02:33:50.434662 307222 pod_ready.go:86] duration metric: took 399.329431ms for pod "kube-scheduler-no-preload-483142" in "kube-system" namespace to be "Ready" or be gone ...
I1119 02:33:50.434673 307222 pod_ready.go:40] duration metric: took 1.604675519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1119 02:33:50.483179 307222 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
I1119 02:33:50.485257 307222 out.go:179] * Done! kubectl is now configured to use "no-preload-483142" cluster and "default" namespace by default
W1119 02:33:50.355270 315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
W1119 02:33:52.857401 315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
W1119 02:33:55.355262 315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
W1119 02:33:57.855402 315363 node_ready.go:57] node "embed-certs-168452" has "Ready":"False" status (will retry)
I1119 02:33:58.855203 315363 node_ready.go:49] node "embed-certs-168452" is "Ready"
I1119 02:33:58.855237 315363 node_ready.go:38] duration metric: took 10.503369895s for node "embed-certs-168452" to be "Ready" ...
I1119 02:33:58.855255 315363 api_server.go:52] waiting for apiserver process to appear ...
I1119 02:33:58.855343 315363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1119 02:33:58.869209 315363 api_server.go:72] duration metric: took 10.863154231s to wait for apiserver process to appear ...
I1119 02:33:58.869250 315363 api_server.go:88] waiting for apiserver healthz status ...
I1119 02:33:58.869274 315363 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
I1119 02:33:58.875569 315363 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
ok
I1119 02:33:58.876575 315363 api_server.go:141] control plane version: v1.34.1
I1119 02:33:58.876617 315363 api_server.go:131] duration metric: took 7.360045ms to wait for apiserver health ...
I1119 02:33:58.876629 315363 system_pods.go:43] waiting for kube-system pods to appear ...
I1119 02:33:58.880702 315363 system_pods.go:59] 8 kube-system pods found
I1119 02:33:58.880740 315363 system_pods.go:61] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:58.880760 315363 system_pods.go:61] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
I1119 02:33:58.880773 315363 system_pods.go:61] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
I1119 02:33:58.880780 315363 system_pods.go:61] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
I1119 02:33:58.880788 315363 system_pods.go:61] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
I1119 02:33:58.880793 315363 system_pods.go:61] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
I1119 02:33:58.880798 315363 system_pods.go:61] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
I1119 02:33:58.880805 315363 system_pods.go:61] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:58.880814 315363 system_pods.go:74] duration metric: took 4.173761ms to wait for pod list to return data ...
I1119 02:33:58.880828 315363 default_sa.go:34] waiting for default service account to be created ...
I1119 02:33:58.888971 315363 default_sa.go:45] found service account: "default"
I1119 02:33:58.888998 315363 default_sa.go:55] duration metric: took 8.162397ms for default service account to be created ...
I1119 02:33:58.889023 315363 system_pods.go:116] waiting for k8s-apps to be running ...
I1119 02:33:58.892650 315363 system_pods.go:86] 8 kube-system pods found
I1119 02:33:58.892685 315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:58.892694 315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
I1119 02:33:58.892703 315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
I1119 02:33:58.892709 315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
I1119 02:33:58.892716 315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
I1119 02:33:58.892721 315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
I1119 02:33:58.892726 315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
I1119 02:33:58.892734 315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:58.892772 315363 retry.go:31] will retry after 264.439801ms: missing components: kube-dns
I1119 02:33:59.162425 315363 system_pods.go:86] 8 kube-system pods found
I1119 02:33:59.162466 315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:59.162474 315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
I1119 02:33:59.162483 315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
I1119 02:33:59.162488 315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
I1119 02:33:59.162494 315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
I1119 02:33:59.162499 315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
I1119 02:33:59.162505 315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
I1119 02:33:59.162512 315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:59.162533 315363 retry.go:31] will retry after 355.424259ms: missing components: kube-dns
I1119 02:33:59.524153 315363 system_pods.go:86] 8 kube-system pods found
I1119 02:33:59.524197 315363 system_pods.go:89] "coredns-66bc5c9577-zjkgg" [5c9ef71f-e4c3-4b8d-9ad3-71e0a56231e3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1119 02:33:59.524212 315363 system_pods.go:89] "etcd-embed-certs-168452" [d0ec7dd4-3fea-4cb9-9409-d7580e3096e5] Running
I1119 02:33:59.524223 315363 system_pods.go:89] "kindnet-rf6v9" [6e29d839-0594-41f7-bfd8-1f9ab66b4c86] Running
I1119 02:33:59.524229 315363 system_pods.go:89] "kube-apiserver-embed-certs-168452" [1a173dec-e248-4772-8884-094a1416f6bc] Running
I1119 02:33:59.524235 315363 system_pods.go:89] "kube-controller-manager-embed-certs-168452" [54a570a5-683f-435f-8ef3-801a384a4e4c] Running
I1119 02:33:59.524241 315363 system_pods.go:89] "kube-proxy-v65n7" [edc341f0-decd-4b30-a13d-a730cb8fc47d] Running
I1119 02:33:59.524255 315363 system_pods.go:89] "kube-scheduler-embed-certs-168452" [0547e424-6b3a-487f-94ba-a3f38ab4d102] Running
I1119 02:33:59.524262 315363 system_pods.go:89] "storage-provisioner" [eebce997-029a-4da2-b6cd-bb0ff195ebbe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1119 02:33:59.524283 315363 retry.go:31] will retry after 458.998162ms: missing components: kube-dns
==> container status <==
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID          POD                                                  NAMESPACE
fde84f87a7c77   c80c8dbafe7dd   29 seconds ago       Exited    kube-controller-manager   7         35551b04d3546   kube-controller-manager-kubernetes-upgrade-896338    kube-system
138444193ad2d   c3994bc696102   About a minute ago   Exited    kube-apiserver            7         e2959907c57f4   kube-apiserver-kubernetes-upgrade-896338             kube-system
f7df69037dad7   5f1f5298c888d   6 minutes ago        Running   etcd                      0         56e9fd844d8d6   etcd-kubernetes-upgrade-896338                       kube-system
2fc1c7d64ddfc   7dd6aaa1717ab   6 minutes ago        Running   kube-scheduler            0         f2a6405d8feb1   kube-scheduler-kubernetes-upgrade-896338             kube-system
==> containerd <==
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.021062382Z" level=info msg="StartContainer for \"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\""
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.022118954Z" level=info msg="connecting to shim 138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3" address="unix:///run/containerd/s/e320be9675de49d356d8bea84184053a7dc60a98f39c19e3fba6dc0c23042a72" protocol=ttrpc version=3
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.033847191Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_CREATED_EVENT
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.125229003Z" level=info msg="StartContainer for \"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\" returns successfully"
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.135942594Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_STARTED_EVENT
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.172927081Z" level=info msg="received container exit event container_id:\"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\" id:\"138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3\" pid:3784 exit_status:1 exited_at:{seconds:1763519579 nanos:172596614}"
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.239267865Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_STOPPED_EVENT
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.342715266Z" level=info msg="container event discarded" container=622ff9a93a32895a33023cb0085923493b05558510186b9e15b460a8cfe29a06 type=CONTAINER_DELETED_EVENT
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.986207240Z" level=info msg="RemoveContainer for \"06bfe3a0696dbfa4a3c0e0bebb72ad9841dbe9e784377890e1d9773d37735357\""
Nov 19 02:32:59 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:32:59.991223390Z" level=info msg="RemoveContainer for \"06bfe3a0696dbfa4a3c0e0bebb72ad9841dbe9e784377890e1d9773d37735357\" returns successfully"
Nov 19 02:33:11 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:11.047409956Z" level=info msg="container event discarded" container=53296e4f2221b9bbfa0fd6e3750b279e6f5ff82e99f25639336cfa9d9c4fa7b1 type=CONTAINER_CREATED_EVENT
Nov 19 02:33:11 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:11.237862984Z" level=info msg="container event discarded" container=53296e4f2221b9bbfa0fd6e3750b279e6f5ff82e99f25639336cfa9d9c4fa7b1 type=CONTAINER_STARTED_EVENT
Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.005469774Z" level=info msg="CreateContainer within sandbox \"35551b04d3546ac17b04f26ca16fef2308a03fbcbdcf783f23fe3c87100dabef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}"
Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.012626260Z" level=info msg="Container fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342: CDI devices from CRI Config.CDIDevices: []"
Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.021189286Z" level=info msg="CreateContainer within sandbox \"35551b04d3546ac17b04f26ca16fef2308a03fbcbdcf783f23fe3c87100dabef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\""
Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.021760601Z" level=info msg="StartContainer for \"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\""
Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.024979395Z" level=info msg="connecting to shim fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342" address="unix:///run/containerd/s/51870f9e62ca288f0c9b6fccb65cd55df1acecff3786dd06b1beeaee71a30efa" protocol=ttrpc version=3
Nov 19 02:33:32 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:32.128897956Z" level=info msg="StartContainer for \"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\" returns successfully"
Nov 19 02:33:44 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:44.291399293Z" level=info msg="received container exit event container_id:\"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\" id:\"fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342\" pid:3833 exit_status:1 exited_at:{seconds:1763519624 nanos:291133835}"
Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.030006716Z" level=info msg="container event discarded" container=324645156bf7a8fe278ec737183ebf2e2f74cff3d9677b348e2be20e9f44205e type=CONTAINER_CREATED_EVENT
Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.094231812Z" level=info msg="RemoveContainer for \"1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03\""
Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.098658834Z" level=info msg="RemoveContainer for \"1ba0c8fe18b0c917482c746cfef00696629bcc9748d8c3e10ced55d71c2c1a03\" returns successfully"
Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.121149610Z" level=info msg="container event discarded" container=324645156bf7a8fe278ec737183ebf2e2f74cff3d9677b348e2be20e9f44205e type=CONTAINER_STARTED_EVENT
Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.200566583Z" level=info msg="container event discarded" container=324645156bf7a8fe278ec737183ebf2e2f74cff3d9677b348e2be20e9f44205e type=CONTAINER_STOPPED_EVENT
Nov 19 02:33:45 kubernetes-upgrade-896338 containerd[2107]: time="2025-11-19T02:33:45.448296380Z" level=info msg="container event discarded" container=24679a3f4e0ae74c83eaf8c278efdd96c3afb2c5688e0f358470b78a7f7a38e9 type=CONTAINER_DELETED_EVENT
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
==> dmesg <==
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
[Nov19 02:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 74 0c d7 a6 53 08 06
[ +0.000339] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 cb 73 dc 0d 8a 08 06
[ +28.680399] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 e9 7c 92 36 13 08 06
[ +0.000001] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
[Nov19 02:32] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
[ +4.552839] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
[ +11.086189] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 76 d1 26 7f 3d 08 06
[ +0.000377] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 7c 46 16 38 40 08 06
[ +9.270754] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff a2 49 fd 34 51 3b 08 06
[ +0.000702] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 88 80 5a bc 94 08 06
[ +23.593864] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 86 43 5f 18 4c 08 06
[ +0.000495] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a f6 63 65 24 bc 08 06
==> etcd [f7df69037dad73c346bafade9f17ccda547baf86f109ee96ebf9ec5074fdc32c] <==
{"level":"info","ts":"2025-11-19T02:27:13.340874Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
{"level":"info","ts":"2025-11-19T02:27:13.340937Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
{"level":"info","ts":"2025-11-19T02:27:13.341019Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
{"level":"info","ts":"2025-11-19T02:27:13.341043Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-11-19T02:27:13.341066Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 4"}
{"level":"info","ts":"2025-11-19T02:27:13.380650Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2025-11-19T02:27:13.380706Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
{"level":"info","ts":"2025-11-19T02:27:13.380747Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 4"}
{"level":"info","ts":"2025-11-19T02:27:13.380766Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2025-11-19T02:27:13.440272Z","caller":"etcdserver/server.go:1804","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-896338 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
{"level":"info","ts":"2025-11-19T02:27:13.440340Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-11-19T02:27:13.440332Z","caller":"etcdserver/server.go:2409","msg":"updating cluster version using v3 API","from":"3.5","to":"3.6"}
{"level":"info","ts":"2025-11-19T02:27:13.440287Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-11-19T02:27:13.440492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-11-19T02:27:13.440518Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-11-19T02:27:13.441695Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"warn","ts":"2025-11-19T02:27:13.441701Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
{"level":"info","ts":"2025-11-19T02:27:13.442214Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-11-19T02:27:13.445812Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
{"level":"info","ts":"2025-11-19T02:27:13.446133Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-11-19T02:27:13.504146Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.5","to":"3.6"}
{"level":"info","ts":"2025-11-19T02:27:13.504730Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
{"level":"info","ts":"2025-11-19T02:27:13.504846Z","caller":"etcdserver/server.go:2424","msg":"cluster version is updated","cluster-version":"3.6"}
{"level":"info","ts":"2025-11-19T02:27:13.504929Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
{"level":"info","ts":"2025-11-19T02:27:13.505097Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
==> kernel <==
02:35:01 up 1:17, 0 user, load average: 3.86, 3.82, 2.65
Linux kubernetes-upgrade-896338 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kube-apiserver [138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3] <==
I1119 02:32:59.166043 1 options.go:263] external host was not specified, using 192.168.85.2
I1119 02:32:59.168288 1 server.go:150] Version: v1.34.1
I1119 02:32:59.168338 1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
E1119 02:32:59.168696 1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8443: listen tcp 0.0.0.0:8443: bind: address already in use"
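The kube-apiserver crash loop recorded in the kubelet section below traces back to this bind failure: port 8443 was already occupied (most likely by an earlier apiserver instance that never released it), so each restarted container exits immediately. As an illustrative, hedged check only (not part of this test run; it assumes ss and crictl are available in the node image, which is normally the case for the kicbase image), one could confirm what owns the port from inside the node:
out/minikube-linux-amd64 ssh -p kubernetes-upgrade-896338
# inside the node:
sudo ss -ltnp | grep 8443              # show which process is bound to 8443
sudo crictl ps --name kube-apiserver   # list kube-apiserver containers containerd still has running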
==> kube-controller-manager [fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342] <==
I1119 02:33:32.602406 1 serving.go:386] Generated self-signed cert in-memory
I1119 02:33:34.269353 1 controllermanager.go:191] "Starting" version="v1.34.1"
I1119 02:33:34.269400 1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1119 02:33:34.272111 1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
I1119 02:33:34.272192 1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1119 02:33:34.272452 1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1119 02:33:34.272666 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
E1119 02:33:44.286563 1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[-]log failed: reason withheld\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
==> kube-scheduler [2fc1c7d64ddfc8cfae76fafb1d2818e8e60acd2e091805d791cfdd40dbc01017] <==
I1119 02:27:11.053882 1 serving.go:386] Generated self-signed cert in-memory
W1119 02:28:11.686321 1 authentication.go:397] Error looking up in-cluster authentication configuration: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)
W1119 02:28:11.686355 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1119 02:28:11.686383 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1119 02:28:11.712530 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
I1119 02:28:11.712558 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1119 02:28:11.716268 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1119 02:28:11.716607 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1119 02:28:11.717079 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1119 02:28:11.717269 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1119 02:28:11.816783 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E1119 02:28:45.821489 1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.18794773cf64041d kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-11-19 02:28:11.818492899 +0000 UTC m=+61.524664710,Series:nil,ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-896338,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner f6d9e6ac-27ed-4a02-94ee-92ca173894d7 v1 428 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
E1119 02:28:45.832113 1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
E1119 02:33:45.828418 1 pod_status_patch.go:111] "Failed to patch pod status" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/storage-provisioner"
E1119 02:33:45.828690 1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{storage-provisioner.18794773cf64041d kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},EventTime:2025-11-19 02:28:11.818492899 +0000 UTC m=+61.524664710,Series:&EventSeries{Count:2,LastObservedTime:2025-11-19 02:33:11.82615292 +0000 UTC m=+361.532324724,},ReportingController:default-scheduler,ReportingInstance:default-scheduler-kubernetes-upgrade-896338,Action:Scheduling,Reason:FailedScheduling,Regarding:{Pod kube-system storage-provisioner f6d9e6ac-27ed-4a02-94ee-92ca173894d7 v1 428 },Related:nil,Note:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.,Type:Warning,DeprecatedSource:{ },DeprecatedFirstTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedLastTimestamp:0001-01-01 00:00:00 +0000 UTC,DeprecatedCount:0,}"
==> kubelet <==
Nov 19 02:34:20 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:20.003534 1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
Nov 19 02:34:20 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:20.003737 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
Nov 19 02:34:23 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:23.003992 1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
Nov 19 02:34:23 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:23.004169 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
Nov 19 02:34:25 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:25.129563 1153 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-896338.1879475d20211559 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-896338,UID:1d4af482de8ef1996b35bfa6adfca717,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-896338,},FirstTimestamp:2025-11-19 02:26:34.388829529 +0000 UTC m=+20.466301320,LastTimestamp:2025-11-19 02:26:52.38598035 +0000 UTC m=+38.463452140,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-896338,}"
Nov 19 02:34:29 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:29.005524 1153 mirror_client.go:139] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
Nov 19 02:34:31 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:31.003625 1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
Nov 19 02:34:31 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:31.003794 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
Nov 19 02:34:31 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:31.765893 1153 kubelet_node_status.go:107] "Unable to register node with API server" err="Timeout: request did not complete within requested timeout - context deadline exceeded" node="kubernetes-upgrade-896338"
Nov 19 02:34:32 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:32.639539 1153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-896338?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Nov 19 02:34:34 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:34.004504 1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
Nov 19 02:34:34 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:34.004719 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
Nov 19 02:34:38 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:38.768108 1153 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-896338"
Nov 19 02:34:45 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:45.003261 1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
Nov 19 02:34:45 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:45.003510 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
Nov 19 02:34:46 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:46.004331 1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
Nov 19 02:34:46 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:46.004614 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
Nov 19 02:34:49 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:49.641074 1153 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-896338?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Nov 19 02:34:57 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:34:57.003860 1153 scope.go:117] "RemoveContainer" containerID="fde84f87a7c77a310029a5d9dc290d6d8e6e24ee4225e963874d51891692a342"
Nov 19 02:34:57 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:57.004017 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-896338" podUID="e32c4b2970efa8ef72e4afc8aa2f7038"
Nov 19 02:34:59 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:34:59.131850 1153 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-kubernetes-upgrade-896338.187947608e8d132c kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-kubernetes-upgrade-896338,UID:e32c4b2970efa8ef72e4afc8aa2f7038,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-kubernetes-upgrade-896338_kube-system(e32c4b2970efa8ef72e4afc8aa2f7038),Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-896338,},FirstTimestamp:2025-11-19 02:26:49.126302508 +0000 UTC m=+35.203774294,LastTimestamp:2025-11-19 02:26:54.708921663 +0000 UTC m=+40.786393456,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-896338,}"
Nov 19 02:35:00 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:35:00.003928 1153 scope.go:117] "RemoveContainer" containerID="138444193ad2dbb17169b6928e98492dc3f50c4037c63e586b06500015e8e7d3"
Nov 19 02:35:00 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:35:00.004146 1153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-896338_kube-system(1d4af482de8ef1996b35bfa6adfca717)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-896338" podUID="1d4af482de8ef1996b35bfa6adfca717"
Nov 19 02:35:00 kubernetes-upgrade-896338 kubelet[1153]: E1119 02:35:00.034585 1153 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-scheduler-kubernetes-upgrade-896338)" podUID="6e6aa192bc499077f7f5955d155982e2" pod="kube-system/kube-scheduler-kubernetes-upgrade-896338"
Nov 19 02:35:01 kubernetes-upgrade-896338 kubelet[1153]: I1119 02:35:01.003809 1153 kubelet.go:3202] "Trying to delete pod" pod="kube-system/etcd-kubernetes-upgrade-896338" podUID="eea50bd2-467d-40e3-ac23-12aa3fd98404"
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-896338 -n kubernetes-upgrade-896338: exit status 2 (13.841005364s)
-- stdout --
Error
-- /stdout --
** stderr **
E1119 02:35:15.959500 335426 status.go:466] Error apiserver status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[-]log failed: reason withheld
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
** /stderr **
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-896338" apiserver is not running, skipping kubectl commands (state="Error")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-896338" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p kubernetes-upgrade-896338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-896338: (2.879662899s)
--- FAIL: TestKubernetesUpgrade (595.80s)
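For reference when reading the verbose /healthz report in the stderr block above: only one check fails, "[-]log failed: reason withheld", which is enough for the apiserver to return 500 and for minikube status to report the apiserver as "Error". A hedged way to reproduce the same report on a fresh run of this profile (the --raw flag and the ?verbose query are standard kubectl/apiserver features; the context name is the one used in this log, and the profile would need to be recreated first since it was deleted above):
kubectl --context kubernetes-upgrade-896338 get --raw '/healthz?verbose'
# /healthz is deprecated in favour of the split endpoints, which can also be queried verbosely:
kubectl --context kubernetes-upgrade-896338 get --raw '/livez?verbose'
kubectl --context kubernetes-upgrade-896338 get --raw '/readyz?verbose'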