=== RUN TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run: out/minikube-linux-amd64 start -p kubenet-895879 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker --container-runtime=docker
I1017 19:56:53.356329 17802 config.go:182] Loaded profile config "flannel-895879": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubenet-895879 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker --container-runtime=docker: exit status 80 (34.840045609s)
-- stdout --
* [kubenet-895879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21664
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21664-14234/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-14234/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "kubenet-895879" primary control-plane node in "kubenet-895879" cluster
* Pulling base image v0.0.48-1760609789-21757 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
** stderr **
I1017 19:56:53.098639 427529 out.go:360] Setting OutFile to fd 1 ...
I1017 19:56:53.098820 427529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:56:53.098835 427529 out.go:374] Setting ErrFile to fd 2...
I1017 19:56:53.098842 427529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:56:53.099215 427529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-14234/.minikube/bin
I1017 19:56:53.099946 427529 out.go:368] Setting JSON to false
I1017 19:56:53.101845 427529 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5956,"bootTime":1760725057,"procs":453,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1017 19:56:53.101954 427529 start.go:141] virtualization: kvm guest
I1017 19:56:53.104319 427529 out.go:179] * [kubenet-895879] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1017 19:56:53.105982 427529 out.go:179] - MINIKUBE_LOCATION=21664
I1017 19:56:53.105992 427529 notify.go:220] Checking for updates...
I1017 19:56:53.108681 427529 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1017 19:56:53.110077 427529 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21664-14234/kubeconfig
I1017 19:56:53.111414 427529 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-14234/.minikube
I1017 19:56:53.112655 427529 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1017 19:56:53.113767 427529 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1017 19:56:53.115406 427529 config.go:182] Loaded profile config "bridge-895879": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1017 19:56:53.115549 427529 config.go:182] Loaded profile config "enable-default-cni-895879": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1017 19:56:53.115668 427529 config.go:182] Loaded profile config "flannel-895879": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1017 19:56:53.115831 427529 driver.go:421] Setting default libvirt URI to qemu:///system
I1017 19:56:53.157834 427529 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
I1017 19:56:53.158017 427529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1017 19:56:53.234038 427529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-17 19:56:53.221845186 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1017 19:56:53.234188 427529 docker.go:318] overlay module found
I1017 19:56:53.238412 427529 out.go:179] * Using the docker driver based on user configuration
I1017 19:56:53.239756 427529 start.go:305] selected driver: docker
I1017 19:56:53.239774 427529 start.go:925] validating driver "docker" against <nil>
I1017 19:56:53.239787 427529 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1017 19:56:53.240334 427529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1017 19:56:53.307341 427529 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-17 19:56:53.29528964 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1017 19:56:53.307487 427529 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1017 19:56:53.307701 427529 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1017 19:56:53.309557 427529 out.go:179] * Using Docker driver with root privileges
I1017 19:56:53.311214 427529 cni.go:80] network plugin configured as "kubenet", returning disabled
I1017 19:56:53.311342 427529 start.go:349] cluster config:
{Name:kubenet-895879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubenet-895879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1017 19:56:53.312810 427529 out.go:179] * Starting "kubenet-895879" primary control-plane node in "kubenet-895879" cluster
I1017 19:56:53.314060 427529 cache.go:123] Beginning downloading kic base image for docker with docker
I1017 19:56:53.315370 427529 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
I1017 19:56:53.316399 427529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1017 19:56:53.316449 427529 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-14234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
I1017 19:56:53.316457 427529 cache.go:58] Caching tarball of preloaded images
I1017 19:56:53.316529 427529 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
I1017 19:56:53.316541 427529 preload.go:233] Found /home/jenkins/minikube-integration/21664-14234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1017 19:56:53.316551 427529 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
I1017 19:56:53.316641 427529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/config.json ...
I1017 19:56:53.316659 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/config.json: {Name:mk83b2b67b4598a10cd9f48bc904a8b9b6f4f59e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:56:53.342308 427529 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
I1017 19:56:53.342334 427529 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
I1017 19:56:53.342355 427529 cache.go:232] Successfully downloaded all kic artifacts
I1017 19:56:53.342389 427529 start.go:360] acquireMachinesLock for kubenet-895879: {Name:mk7776f930d9ac31da59cdd791240876593505bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1017 19:56:53.342524 427529 start.go:364] duration metric: took 112.802µs to acquireMachinesLock for "kubenet-895879"
I1017 19:56:53.342553 427529 start.go:93] Provisioning new machine with config: &{Name:kubenet-895879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubenet-895879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1017 19:56:53.342663 427529 start.go:125] createHost starting for "" (driver="docker")
I1017 19:56:53.344588 427529 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
I1017 19:56:53.344870 427529 start.go:159] libmachine.API.Create for "kubenet-895879" (driver="docker")
I1017 19:56:53.344912 427529 client.go:168] LocalClient.Create starting
I1017 19:56:53.345037 427529 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca.pem
I1017 19:56:53.345081 427529 main.go:141] libmachine: Decoding PEM data...
I1017 19:56:53.345101 427529 main.go:141] libmachine: Parsing certificate...
I1017 19:56:53.345189 427529 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-14234/.minikube/certs/cert.pem
I1017 19:56:53.345219 427529 main.go:141] libmachine: Decoding PEM data...
I1017 19:56:53.345235 427529 main.go:141] libmachine: Parsing certificate...
I1017 19:56:53.345733 427529 cli_runner.go:164] Run: docker network inspect kubenet-895879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1017 19:56:53.366996 427529 cli_runner.go:211] docker network inspect kubenet-895879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1017 19:56:53.367067 427529 network_create.go:284] running [docker network inspect kubenet-895879] to gather additional debugging logs...
I1017 19:56:53.367092 427529 cli_runner.go:164] Run: docker network inspect kubenet-895879
W1017 19:56:53.391333 427529 cli_runner.go:211] docker network inspect kubenet-895879 returned with exit code 1
I1017 19:56:53.391364 427529 network_create.go:287] error running [docker network inspect kubenet-895879]: docker network inspect kubenet-895879: exit status 1
stdout:
[]
stderr:
Error response from daemon: network kubenet-895879 not found
I1017 19:56:53.391385 427529 network_create.go:289] output of [docker network inspect kubenet-895879]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network kubenet-895879 not found
** /stderr **
I1017 19:56:53.391466 427529 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1017 19:56:53.415076 427529 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-32ced0ec4317 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:15:43:ca:92:2b} reservation:<nil>}
I1017 19:56:53.415894 427529 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-88189a478f87 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:7d:83:b9:99:24} reservation:<nil>}
I1017 19:56:53.416816 427529 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9b9f959748d0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:94:2b:4f:72:72} reservation:<nil>}
I1017 19:56:53.417465 427529 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8bb85b29c908 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:b8:34:7f:e1:80} reservation:<nil>}
I1017 19:56:53.418268 427529 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00207a420}
I1017 19:56:53.418336 427529 network_create.go:124] attempt to create docker network kubenet-895879 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1017 19:56:53.418388 427529 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-895879 kubenet-895879
I1017 19:56:53.509379 427529 network_create.go:108] docker network kubenet-895879 192.168.85.0/24 created
I1017 19:56:53.509427 427529 kic.go:121] calculated static IP "192.168.85.2" for the "kubenet-895879" container
I1017 19:56:53.509503 427529 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1017 19:56:53.536767 427529 cli_runner.go:164] Run: docker volume create kubenet-895879 --label name.minikube.sigs.k8s.io=kubenet-895879 --label created_by.minikube.sigs.k8s.io=true
I1017 19:56:53.560516 427529 oci.go:103] Successfully created a docker volume kubenet-895879
I1017 19:56:53.560602 427529 cli_runner.go:164] Run: docker run --rm --name kubenet-895879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-895879 --entrypoint /usr/bin/test -v kubenet-895879:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
I1017 19:56:54.023768 427529 oci.go:107] Successfully prepared a docker volume kubenet-895879
I1017 19:56:54.023830 427529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1017 19:56:54.023852 427529 kic.go:194] Starting extracting preloaded images to volume ...
I1017 19:56:54.023960 427529 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-14234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-895879:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
I1017 19:56:58.249027 427529 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-14234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-895879:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.225020195s)
I1017 19:56:58.249112 427529 kic.go:203] duration metric: took 4.225254996s to extract preloaded images to volume ...
W1017 19:56:58.249288 427529 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W1017 19:56:58.249342 427529 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I1017 19:56:58.249387 427529 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1017 19:56:58.325673 427529 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-895879 --name kubenet-895879 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-895879 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-895879 --network kubenet-895879 --ip 192.168.85.2 --volume kubenet-895879:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
I1017 19:56:58.690972 427529 cli_runner.go:164] Run: docker container inspect kubenet-895879 --format={{.State.Running}}
I1017 19:56:58.722625 427529 cli_runner.go:164] Run: docker container inspect kubenet-895879 --format={{.State.Status}}
I1017 19:56:58.757647 427529 cli_runner.go:164] Run: docker exec kubenet-895879 stat /var/lib/dpkg/alternatives/iptables
I1017 19:56:58.813806 427529 oci.go:144] the created container "kubenet-895879" has a running status.
I1017 19:56:58.813911 427529 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa...
I1017 19:56:59.023583 427529 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1017 19:56:59.062773 427529 cli_runner.go:164] Run: docker container inspect kubenet-895879 --format={{.State.Status}}
I1017 19:56:59.087959 427529 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1017 19:56:59.087979 427529 kic_runner.go:114] Args: [docker exec --privileged kubenet-895879 chown docker:docker /home/docker/.ssh/authorized_keys]
I1017 19:56:59.135971 427529 cli_runner.go:164] Run: docker container inspect kubenet-895879 --format={{.State.Status}}
I1017 19:56:59.158641 427529 machine.go:93] provisionDockerMachine start ...
I1017 19:56:59.158757 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:56:59.181187 427529 main.go:141] libmachine: Using SSH client type: native
I1017 19:56:59.181447 427529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1017 19:56:59.181470 427529 main.go:141] libmachine: About to run SSH command:
hostname
I1017 19:56:59.319814 427529 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-895879
I1017 19:56:59.319849 427529 ubuntu.go:182] provisioning hostname "kubenet-895879"
I1017 19:56:59.319923 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:56:59.340401 427529 main.go:141] libmachine: Using SSH client type: native
I1017 19:56:59.340659 427529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1017 19:56:59.340674 427529 main.go:141] libmachine: About to run SSH command:
sudo hostname kubenet-895879 && echo "kubenet-895879" | sudo tee /etc/hostname
I1017 19:56:59.488694 427529 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-895879
I1017 19:56:59.488777 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:56:59.508499 427529 main.go:141] libmachine: Using SSH client type: native
I1017 19:56:59.508751 427529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1017 19:56:59.508772 427529 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\skubenet-895879' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-895879/g' /etc/hosts;
else
echo '127.0.1.1 kubenet-895879' | sudo tee -a /etc/hosts;
fi
fi
I1017 19:56:59.646998 427529 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1017 19:56:59.647031 427529 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-14234/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-14234/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-14234/.minikube}
I1017 19:56:59.647058 427529 ubuntu.go:190] setting up certificates
I1017 19:56:59.647069 427529 provision.go:84] configureAuth start
I1017 19:56:59.647134 427529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-895879
I1017 19:56:59.671468 427529 provision.go:143] copyHostCerts
I1017 19:56:59.671552 427529 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-14234/.minikube/ca.pem, removing ...
I1017 19:56:59.671570 427529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-14234/.minikube/ca.pem
I1017 19:56:59.671672 427529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-14234/.minikube/ca.pem (1078 bytes)
I1017 19:56:59.671805 427529 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-14234/.minikube/cert.pem, removing ...
I1017 19:56:59.671825 427529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-14234/.minikube/cert.pem
I1017 19:56:59.671871 427529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-14234/.minikube/cert.pem (1123 bytes)
I1017 19:56:59.671972 427529 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-14234/.minikube/key.pem, removing ...
I1017 19:56:59.671983 427529 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-14234/.minikube/key.pem
I1017 19:56:59.672023 427529 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-14234/.minikube/key.pem (1679 bytes)
I1017 19:56:59.672120 427529 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-14234/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca-key.pem org=jenkins.kubenet-895879 san=[127.0.0.1 192.168.85.2 kubenet-895879 localhost minikube]
I1017 19:57:00.176807 427529 provision.go:177] copyRemoteCerts
I1017 19:57:00.176872 427529 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1017 19:57:00.176904 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:00.198558 427529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa Username:docker}
I1017 19:57:00.299072 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1017 19:57:00.325559 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I1017 19:57:00.347483 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1017 19:57:00.368009 427529 provision.go:87] duration metric: took 720.922748ms to configureAuth
I1017 19:57:00.368042 427529 ubuntu.go:206] setting minikube options for container-runtime
I1017 19:57:00.368253 427529 config.go:182] Loaded profile config "kubenet-895879": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1017 19:57:00.368366 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:00.390746 427529 main.go:141] libmachine: Using SSH client type: native
I1017 19:57:00.390999 427529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1017 19:57:00.391017 427529 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1017 19:57:00.551652 427529 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I1017 19:57:00.551680 427529 ubuntu.go:71] root file system type: overlay
I1017 19:57:00.551801 427529 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1017 19:57:00.551881 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:00.577178 427529 main.go:141] libmachine: Using SSH client type: native
I1017 19:57:00.577515 427529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1017 19:57:00.577593 427529 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1017 19:57:00.741880 427529 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1017 19:57:00.741967 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:00.766794 427529 main.go:141] libmachine: Using SSH client type: native
I1017 19:57:00.767104 427529 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1017 19:57:00.767136 427529 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1017 19:57:02.705543 427529 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-10-08 12:15:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-10-17 19:57:00.738654321 +0000
@@ -9,23 +9,34 @@
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
Restart=always
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
+
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
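The unit update above follows a compare-and-swap pattern: write the candidate file to a `.new` path, and only when `diff` reports a difference (or the target is missing) move it into place and restart the service. A minimal standalone sketch of that pattern, using illustrative paths rather than the real unit file:

```shell
#!/bin/sh
# Idempotent config install: only swap in the candidate when it differs.
# Paths and contents are illustrative, not the ones from this run.
current=/tmp/demo.service
candidate=/tmp/demo.service.new

printf '%s\n' '[Service]' 'ExecStart=/usr/bin/demo' > "$candidate"

if diff -u "$current" "$candidate" >/dev/null 2>&1; then
  # Identical: discard the candidate, nothing to restart.
  rm -f "$candidate"
  echo unchanged
else
  # Different (or target missing): swap the candidate in.
  mv "$candidate" "$current"
  echo updated
fi
```

Run twice, the second invocation takes the `unchanged` branch, which is what makes repeated provisioning safe.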
I1017 19:57:02.705578 427529 machine.go:96] duration metric: took 3.546907994s to provisionDockerMachine
I1017 19:57:02.705589 427529 client.go:171] duration metric: took 9.360667233s to LocalClient.Create
I1017 19:57:02.705607 427529 start.go:167] duration metric: took 9.360740808s to libmachine.API.Create "kubenet-895879"
I1017 19:57:02.705616 427529 start.go:293] postStartSetup for "kubenet-895879" (driver="docker")
I1017 19:57:02.705631 427529 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1017 19:57:02.705697 427529 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1017 19:57:02.705742 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:02.727059 427529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa Username:docker}
I1017 19:57:02.829925 427529 ssh_runner.go:195] Run: cat /etc/os-release
I1017 19:57:02.834905 427529 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1017 19:57:02.834959 427529 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1017 19:57:02.834971 427529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-14234/.minikube/addons for local assets ...
I1017 19:57:02.835030 427529 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-14234/.minikube/files for local assets ...
I1017 19:57:02.835108 427529 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-14234/.minikube/files/etc/ssl/certs/178022.pem -> 178022.pem in /etc/ssl/certs
I1017 19:57:02.835200 427529 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1017 19:57:02.846490 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/files/etc/ssl/certs/178022.pem --> /etc/ssl/certs/178022.pem (1708 bytes)
I1017 19:57:02.872309 427529 start.go:296] duration metric: took 166.639036ms for postStartSetup
I1017 19:57:02.872761 427529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-895879
I1017 19:57:02.893686 427529 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/config.json ...
I1017 19:57:02.894013 427529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1017 19:57:02.894080 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:02.918095 427529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa Username:docker}
I1017 19:57:03.013613 427529 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1017 19:57:03.019383 427529 start.go:128] duration metric: took 9.676703622s to createHost
I1017 19:57:03.019413 427529 start.go:83] releasing machines lock for "kubenet-895879", held for 9.676875157s
I1017 19:57:03.019488 427529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-895879
I1017 19:57:03.039154 427529 ssh_runner.go:195] Run: cat /version.json
I1017 19:57:03.039179 427529 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1017 19:57:03.039217 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:03.039250 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:03.060420 427529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa Username:docker}
I1017 19:57:03.060983 427529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa Username:docker}
I1017 19:57:03.222233 427529 ssh_runner.go:195] Run: systemctl --version
I1017 19:57:03.229323 427529 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1017 19:57:03.235133 427529 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1017 19:57:03.235212 427529 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1017 19:57:03.264404 427529 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
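The `find ... -exec mv` above disables conflicting CNI configs by renaming rather than deleting, so they can be restored later. A sketch of that disable-by-rename pattern against a scratch directory (names are illustrative):

```shell
#!/bin/sh
# Disable-by-rename: move matching config files aside with a suffix
# instead of deleting them. Demo directory and files are illustrative.
d=/tmp/cni-demo
mkdir -p "$d"
touch "$d/10-bridge.conflist" "$d/99-loopback.conf"

# Rename bridge configs that are not already disabled; leave others alone.
find "$d" -maxdepth 1 -type f \
  \( -name '*bridge*' -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

The `-not -name '*.mk_disabled'` guard keeps the operation idempotent across reruns.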
I1017 19:57:03.264433 427529 start.go:495] detecting cgroup driver to use...
I1017 19:57:03.264470 427529 detect.go:190] detected "systemd" cgroup driver on host os
I1017 19:57:03.264629 427529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1017 19:57:03.280586 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1017 19:57:03.295362 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1017 19:57:03.305720 427529 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
I1017 19:57:03.305781 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
I1017 19:57:03.316046 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1017 19:57:03.326196 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1017 19:57:03.335834 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1017 19:57:03.345797 427529 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1017 19:57:03.355208 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1017 19:57:03.365797 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1017 19:57:03.375551 427529 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
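The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place while preserving indentation via a capture group. The `SystemdCgroup` flip, reproduced on a scratch file with an illustrative config snippet:

```shell
#!/bin/sh
# Sketch of the indentation-preserving sed rewrite used on config.toml.
# File path and contents here are illustrative.
cfg=/tmp/config.toml.demo
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# \1 re-emits the captured leading whitespace, so nesting is kept intact.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' "$cfg"
```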
I1017 19:57:03.386034 427529 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1017 19:57:03.395666 427529 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1017 19:57:03.404093 427529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1017 19:57:03.491867 427529 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1017 19:57:03.588461 427529 start.go:495] detecting cgroup driver to use...
I1017 19:57:03.588516 427529 detect.go:190] detected "systemd" cgroup driver on host os
I1017 19:57:03.588568 427529 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1017 19:57:03.603679 427529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1017 19:57:03.618962 427529 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1017 19:57:03.639445 427529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1017 19:57:03.654553 427529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1017 19:57:03.669056 427529 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1017 19:57:03.685051 427529 ssh_runner.go:195] Run: which cri-dockerd
I1017 19:57:03.689579 427529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1017 19:57:03.700898 427529 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
I1017 19:57:03.716701 427529 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1017 19:57:03.821415 427529 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1017 19:57:03.920627 427529 docker.go:575] configuring docker to use "systemd" as cgroup driver...
I1017 19:57:03.920761 427529 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
I1017 19:57:03.935199 427529 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1017 19:57:03.949498 427529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1017 19:57:04.042248 427529 ssh_runner.go:195] Run: sudo systemctl restart docker
I1017 19:57:06.503843 427529 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.461553928s)
I1017 19:57:06.503924 427529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1017 19:57:06.521159 427529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1017 19:57:06.537223 427529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1017 19:57:06.553320 427529 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1017 19:57:06.679248 427529 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1017 19:57:06.789818 427529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1017 19:57:06.894319 427529 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1017 19:57:06.918951 427529 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1017 19:57:06.932903 427529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1017 19:57:07.029982 427529 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1017 19:57:07.115418 427529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1017 19:57:07.130522 427529 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1017 19:57:07.130685 427529 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1017 19:57:07.135235 427529 start.go:563] Will wait 60s for crictl version
I1017 19:57:07.135352 427529 ssh_runner.go:195] Run: which crictl
I1017 19:57:07.139632 427529 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1017 19:57:07.167907 427529 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.1
RuntimeApiVersion: v1
I1017 19:57:07.167977 427529 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1017 19:57:07.196455 427529 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1017 19:57:07.226294 427529 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
I1017 19:57:07.226388 427529 cli_runner.go:164] Run: docker network inspect kubenet-895879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1017 19:57:07.246657 427529 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1017 19:57:07.251420 427529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
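The `/etc/hosts` edit above is idempotent: `grep -v` strips any existing `host.minikube.internal` line before the current mapping is appended, so repeated starts never accumulate duplicates. Sketched against a scratch file (the real command uses a tab separator and `sudo cp` back to `/etc/hosts`):

```shell
#!/bin/sh
# Idempotent hosts entry: drop any old mapping, append the new one.
# Uses a scratch file and space separators for illustration.
hosts=/tmp/hosts.demo
printf '127.0.0.1 localhost\n192.168.85.1 host.minikube.internal\n' > "$hosts"

{ grep -v ' host\.minikube\.internal$' "$hosts"; \
  echo '192.168.99.1 host.minikube.internal'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```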
I1017 19:57:07.262978 427529 kubeadm.go:883] updating cluster {Name:kubenet-895879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubenet-895879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1017 19:57:07.263121 427529 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1017 19:57:07.263190 427529 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1017 19:57:07.285197 427529 docker.go:691] Got preloaded images: -- stdout --
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1017 19:57:07.285223 427529 docker.go:621] Images already preloaded, skipping extraction
I1017 19:57:07.285317 427529 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1017 19:57:07.307046 427529 docker.go:691] Got preloaded images: -- stdout --
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1017 19:57:07.307070 427529 cache_images.go:85] Images are preloaded, skipping loading
I1017 19:57:07.307082 427529 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 docker true true} ...
I1017 19:57:07.307206 427529 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-895879 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --pod-cidr=10.244.0.0/16
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:kubenet-895879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1017 19:57:07.307271 427529 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1017 19:57:07.368713 427529 cni.go:80] network plugin configured as "kubenet", returning disabled
I1017 19:57:07.368742 427529 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1017 19:57:07.368765 427529 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-895879 NodeName:kubenet-895879 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1017 19:57:07.368902 427529 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "kubenet-895879"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
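The generated kubeadm config above is a multi-document YAML stream separated by `---`. One way to pull a single document out for inspection is to count separators with awk, sketched here on an illustrative two-document file:

```shell
#!/bin/sh
# Extract the second document of a multi-doc YAML stream by counting
# '---' separators. Demo input is illustrative, not the real config.
y=/tmp/kubeadm.demo.yaml
cat > "$y" <<'EOF'
kind: InitConfiguration
bindPort: 8443
---
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

# d counts separators seen so far; print lines of document index 1.
awk '/^---$/{d++; next} d==1' "$y" > /tmp/kubelet.demo.yaml
```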
I1017 19:57:07.368967 427529 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1017 19:57:07.378012 427529 binaries.go:44] Found k8s binaries, skipping transfer
I1017 19:57:07.378076 427529 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1017 19:57:07.387568 427529 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
I1017 19:57:07.402928 427529 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1017 19:57:07.418212 427529 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
I1017 19:57:07.432586 427529 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1017 19:57:07.437098 427529 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1017 19:57:07.448605 427529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1017 19:57:07.545397 427529 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1017 19:57:07.571185 427529 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879 for IP: 192.168.85.2
I1017 19:57:07.571235 427529 certs.go:195] generating shared ca certs ...
I1017 19:57:07.571255 427529 certs.go:227] acquiring lock for ca certs: {Name:mka62e42b7f9ddc43267a6f7cff35b2e634bc87c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:07.571493 427529 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-14234/.minikube/ca.key
I1017 19:57:07.571555 427529 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-14234/.minikube/proxy-client-ca.key
I1017 19:57:07.571568 427529 certs.go:257] generating profile certs ...
I1017 19:57:07.571633 427529 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/client.key
I1017 19:57:07.571658 427529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/client.crt with IP's: []
I1017 19:57:07.728027 427529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/client.crt ...
I1017 19:57:07.728052 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/client.crt: {Name:mk5d6c312359de51b92bd66758c665a8874e8dbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:07.728241 427529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/client.key ...
I1017 19:57:07.728264 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/client.key: {Name:mkc0d0ea13307c1e2b289201ba237ce3961cd664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:07.728388 427529 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.key.324143a2
I1017 19:57:07.728404 427529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.crt.324143a2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1017 19:57:07.988338 427529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.crt.324143a2 ...
I1017 19:57:07.988373 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.crt.324143a2: {Name:mk1b5c10f1049183739ae4739fbd2c1db869bddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:07.988593 427529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.key.324143a2 ...
I1017 19:57:07.988613 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.key.324143a2: {Name:mk8d8538d94c89c18b3e75ecfbf35360e11966cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:07.988728 427529 certs.go:382] copying /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.crt.324143a2 -> /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.crt
I1017 19:57:07.988860 427529 certs.go:386] copying /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.key.324143a2 -> /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.key
I1017 19:57:07.988961 427529 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.key
I1017 19:57:07.988983 427529 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.crt with IP's: []
I1017 19:57:08.239577 427529 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.crt ...
I1017 19:57:08.239604 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.crt: {Name:mk31c0c23f5f552a82c962729647d507a5d55a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:08.239772 427529 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.key ...
I1017 19:57:08.239785 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.key: {Name:mk5cb0365becb8771e9c0c7d20bfeb7841ff1bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:08.239982 427529 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/17802.pem (1338 bytes)
W1017 19:57:08.240018 427529 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-14234/.minikube/certs/17802_empty.pem, impossibly tiny 0 bytes
I1017 19:57:08.240029 427529 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca-key.pem (1675 bytes)
I1017 19:57:08.240052 427529 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/ca.pem (1078 bytes)
I1017 19:57:08.240074 427529 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/cert.pem (1123 bytes)
I1017 19:57:08.240096 427529 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-14234/.minikube/certs/key.pem (1679 bytes)
I1017 19:57:08.240136 427529 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-14234/.minikube/files/etc/ssl/certs/178022.pem (1708 bytes)
I1017 19:57:08.240733 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1017 19:57:08.263135 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1017 19:57:08.288265 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1017 19:57:08.309736 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1017 19:57:08.330635 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1017 19:57:08.353178 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1017 19:57:08.381329 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1017 19:57:08.405376 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/profiles/kubenet-895879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1017 19:57:08.425744 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1017 19:57:08.450926 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/certs/17802.pem --> /usr/share/ca-certificates/17802.pem (1338 bytes)
I1017 19:57:08.477245 427529 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-14234/.minikube/files/etc/ssl/certs/178022.pem --> /usr/share/ca-certificates/178022.pem (1708 bytes)
I1017 19:57:08.497383 427529 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1017 19:57:08.512006 427529 ssh_runner.go:195] Run: openssl version
I1017 19:57:08.519046 427529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1017 19:57:08.530613 427529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1017 19:57:08.535364 427529 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:13 /usr/share/ca-certificates/minikubeCA.pem
I1017 19:57:08.535437 427529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1017 19:57:08.597514 427529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1017 19:57:08.608183 427529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17802.pem && ln -fs /usr/share/ca-certificates/17802.pem /etc/ssl/certs/17802.pem"
I1017 19:57:08.618271 427529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17802.pem
I1017 19:57:08.622822 427529 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 19:18 /usr/share/ca-certificates/17802.pem
I1017 19:57:08.622891 427529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17802.pem
I1017 19:57:08.663099 427529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17802.pem /etc/ssl/certs/51391683.0"
I1017 19:57:08.674264 427529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178022.pem && ln -fs /usr/share/ca-certificates/178022.pem /etc/ssl/certs/178022.pem"
I1017 19:57:08.684910 427529 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178022.pem
I1017 19:57:08.689766 427529 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 19:18 /usr/share/ca-certificates/178022.pem
I1017 19:57:08.689839 427529 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178022.pem
I1017 19:57:08.728040 427529 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178022.pem /etc/ssl/certs/3ec20f2e.0"
I1017 19:57:08.738316 427529 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1017 19:57:08.742838 427529 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1017 19:57:08.742929 427529 kubeadm.go:400] StartCluster: {Name:kubenet-895879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubenet-895879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1017 19:57:08.743059 427529 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1017 19:57:08.764994 427529 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1017 19:57:08.775429 427529 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1017 19:57:08.784663 427529 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1017 19:57:08.784727 427529 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1017 19:57:08.794184 427529 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1017 19:57:08.794209 427529 kubeadm.go:157] found existing configuration files:
I1017 19:57:08.794259 427529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1017 19:57:08.803438 427529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1017 19:57:08.803506 427529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1017 19:57:08.812455 427529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1017 19:57:08.822050 427529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1017 19:57:08.822107 427529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1017 19:57:08.830680 427529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1017 19:57:08.839775 427529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1017 19:57:08.839840 427529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1017 19:57:08.848309 427529 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1017 19:57:08.857302 427529 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1017 19:57:08.857360 427529 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1017 19:57:08.866110 427529 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1017 19:57:08.911980 427529 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
I1017 19:57:08.912058 427529 kubeadm.go:318] [preflight] Running pre-flight checks
I1017 19:57:08.938476 427529 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
I1017 19:57:08.938562 427529 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
I1017 19:57:08.938620 427529 kubeadm.go:318] OS: Linux
I1017 19:57:08.938686 427529 kubeadm.go:318] CGROUPS_CPU: enabled
I1017 19:57:08.938747 427529 kubeadm.go:318] CGROUPS_CPUSET: enabled
I1017 19:57:08.938816 427529 kubeadm.go:318] CGROUPS_DEVICES: enabled
I1017 19:57:08.938885 427529 kubeadm.go:318] CGROUPS_FREEZER: enabled
I1017 19:57:08.938952 427529 kubeadm.go:318] CGROUPS_MEMORY: enabled
I1017 19:57:08.939010 427529 kubeadm.go:318] CGROUPS_PIDS: enabled
I1017 19:57:08.939080 427529 kubeadm.go:318] CGROUPS_HUGETLB: enabled
I1017 19:57:08.939157 427529 kubeadm.go:318] CGROUPS_IO: enabled
I1017 19:57:09.005504 427529 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
I1017 19:57:09.005651 427529 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1017 19:57:09.005791 427529 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1017 19:57:09.020490 427529 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1017 19:57:09.024491 427529 out.go:252] - Generating certificates and keys ...
I1017 19:57:09.024615 427529 kubeadm.go:318] [certs] Using existing ca certificate authority
I1017 19:57:09.024707 427529 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
I1017 19:57:09.523389 427529 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
I1017 19:57:09.792118 427529 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
I1017 19:57:10.026031 427529 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
I1017 19:57:10.099895 427529 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
I1017 19:57:10.588079 427529 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
I1017 19:57:10.588337 427529 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kubenet-895879 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1017 19:57:10.963596 427529 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
I1017 19:57:10.963766 427529 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kubenet-895879 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I1017 19:57:11.608061 427529 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
I1017 19:57:12.003638 427529 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
I1017 19:57:12.184455 427529 kubeadm.go:318] [certs] Generating "sa" key and public key
I1017 19:57:12.184597 427529 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1017 19:57:12.667853 427529 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
I1017 19:57:13.734118 427529 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1017 19:57:14.244736 427529 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1017 19:57:14.469362 427529 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1017 19:57:14.611108 427529 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1017 19:57:14.611657 427529 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1017 19:57:14.615586 427529 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1017 19:57:14.618265 427529 out.go:252] - Booting up control plane ...
I1017 19:57:14.618419 427529 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1017 19:57:14.618524 427529 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1017 19:57:14.618612 427529 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1017 19:57:14.635581 427529 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1017 19:57:14.635751 427529 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1017 19:57:14.645024 427529 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1017 19:57:14.645411 427529 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1017 19:57:14.645496 427529 kubeadm.go:318] [kubelet-start] Starting the kubelet
I1017 19:57:14.769821 427529 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1017 19:57:14.770008 427529 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1017 19:57:15.770761 427529 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001933232s
I1017 19:57:15.775704 427529 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1017 19:57:15.775794 427529 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
I1017 19:57:15.775960 427529 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1017 19:57:15.776080 427529 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1017 19:57:17.724704 427529 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.949203529s
I1017 19:57:19.070773 427529 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.29551324s
I1017 19:57:21.278458 427529 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.503139177s
I1017 19:57:21.292544 427529 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1017 19:57:21.307109 427529 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1017 19:57:21.320698 427529 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
I1017 19:57:21.320996 427529 kubeadm.go:318] [mark-control-plane] Marking the node kubenet-895879 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1017 19:57:21.336181 427529 kubeadm.go:318] [bootstrap-token] Using token: txi92y.4amjep3gnhvvuxnn
I1017 19:57:21.337867 427529 out.go:252] - Configuring RBAC rules ...
I1017 19:57:21.338064 427529 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1017 19:57:21.343772 427529 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1017 19:57:21.353788 427529 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1017 19:57:21.358403 427529 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1017 19:57:21.362981 427529 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1017 19:57:21.367237 427529 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1017 19:57:21.686635 427529 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1017 19:57:22.108468 427529 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
I1017 19:57:22.685188 427529 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
I1017 19:57:22.686347 427529 kubeadm.go:318]
I1017 19:57:22.686437 427529 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
I1017 19:57:22.686478 427529 kubeadm.go:318]
I1017 19:57:22.686611 427529 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
I1017 19:57:22.686623 427529 kubeadm.go:318]
I1017 19:57:22.686668 427529 kubeadm.go:318] mkdir -p $HOME/.kube
I1017 19:57:22.686751 427529 kubeadm.go:318] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1017 19:57:22.686825 427529 kubeadm.go:318] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1017 19:57:22.686836 427529 kubeadm.go:318]
I1017 19:57:22.686938 427529 kubeadm.go:318] Alternatively, if you are the root user, you can run:
I1017 19:57:22.686948 427529 kubeadm.go:318]
I1017 19:57:22.687010 427529 kubeadm.go:318] export KUBECONFIG=/etc/kubernetes/admin.conf
I1017 19:57:22.687021 427529 kubeadm.go:318]
I1017 19:57:22.687102 427529 kubeadm.go:318] You should now deploy a pod network to the cluster.
I1017 19:57:22.687197 427529 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1017 19:57:22.687299 427529 kubeadm.go:318] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1017 19:57:22.687316 427529 kubeadm.go:318]
I1017 19:57:22.687427 427529 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
I1017 19:57:22.687542 427529 kubeadm.go:318] and service account keys on each node and then running the following as root:
I1017 19:57:22.687557 427529 kubeadm.go:318]
I1017 19:57:22.687667 427529 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token txi92y.4amjep3gnhvvuxnn \
I1017 19:57:22.687825 427529 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:84e4e31d92a096aeb1438d67005afd45eb9212075ad3706839d22080668a957b \
I1017 19:57:22.687880 427529 kubeadm.go:318] --control-plane
I1017 19:57:22.687889 427529 kubeadm.go:318]
I1017 19:57:22.687988 427529 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
I1017 19:57:22.687997 427529 kubeadm.go:318]
I1017 19:57:22.688103 427529 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token txi92y.4amjep3gnhvvuxnn \
I1017 19:57:22.688231 427529 kubeadm.go:318] --discovery-token-ca-cert-hash sha256:84e4e31d92a096aeb1438d67005afd45eb9212075ad3706839d22080668a957b
I1017 19:57:22.691316 427529 kubeadm.go:318] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
I1017 19:57:22.691440 427529 kubeadm.go:318] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1017 19:57:22.691454 427529 cni.go:80] network plugin configured as "kubenet", returning disabled
I1017 19:57:22.691474 427529 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1017 19:57:22.691591 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:22.691612 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-895879 minikube.k8s.io/updated_at=2025_10_17T19_57_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=kubenet-895879 minikube.k8s.io/primary=true
I1017 19:57:22.705555 427529 ops.go:34] apiserver oom_adj: -16
I1017 19:57:22.779000 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:23.279748 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:23.779123 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:24.280069 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:24.779569 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:25.279112 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:25.780059 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:26.281663 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:26.779155 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:27.279536 427529 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1017 19:57:27.370200 427529 kubeadm.go:1113] duration metric: took 4.67867078s to wait for elevateKubeSystemPrivileges
I1017 19:57:27.370237 427529 kubeadm.go:402] duration metric: took 18.627313406s to StartCluster
I1017 19:57:27.370257 427529 settings.go:142] acquiring lock: {Name:mkc3f7d8b400bb3b498591fec2549163b51ee00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:27.370356 427529 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21664-14234/kubeconfig
I1017 19:57:27.371895 427529 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-14234/kubeconfig: {Name:mk2cc1f5f966737afcd6bbfe03ee8d3e040b711c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1017 19:57:27.372165 427529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1017 19:57:27.372160 427529 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I1017 19:57:27.372259 427529 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1017 19:57:27.372352 427529 config.go:182] Loaded profile config "kubenet-895879": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1017 19:57:27.372364 427529 addons.go:69] Setting storage-provisioner=true in profile "kubenet-895879"
I1017 19:57:27.372383 427529 addons.go:238] Setting addon storage-provisioner=true in "kubenet-895879"
I1017 19:57:27.372412 427529 addons.go:69] Setting default-storageclass=true in profile "kubenet-895879"
I1017 19:57:27.372447 427529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-895879"
I1017 19:57:27.372418 427529 host.go:66] Checking if "kubenet-895879" exists ...
I1017 19:57:27.372897 427529 cli_runner.go:164] Run: docker container inspect kubenet-895879 --format={{.State.Status}}
I1017 19:57:27.373078 427529 cli_runner.go:164] Run: docker container inspect kubenet-895879 --format={{.State.Status}}
I1017 19:57:27.374253 427529 out.go:179] * Verifying Kubernetes components...
I1017 19:57:27.376425 427529 ssh_runner.go:195] Run: sudo systemctl daemon-reload
W1017 19:57:27.399184 427529 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error getting storagev1 interface client config: context "kubenet-895879" does not exist : client config: context "kubenet-895879" does not exist]
I1017 19:57:27.403557 427529 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1017 19:57:27.405229 427529 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1017 19:57:27.405255 427529 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1017 19:57:27.405360 427529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-895879
I1017 19:57:27.445815 427529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21664-14234/.minikube/machines/kubenet-895879/id_rsa Username:docker}
I1017 19:57:27.493987 427529 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1017 19:57:27.562020 427529 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1017 19:57:27.584845 427529 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1017 19:57:27.848899 427529 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
E1017 19:57:27.849430 427529 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: client: client config: context "kubenet-895879" does not exist
I1017 19:57:27.856252 427529 out.go:203]
W1017 19:57:27.858916 427529 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: kubernetes client: client config: client config: context "kubenet-895879" does not exist
W1017 19:57:27.858951 427529 out.go:285] *
W1017 19:57:27.861322 427529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1017 19:57:27.866566 427529 out.go:203]
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (34.88s)