=== RUN TestOffline
=== PAUSE TestOffline
=== CONT TestOffline
aab_offline_test.go:55: (dbg) Run: out/minikube-linux-amd64 start -p offline-docker-649313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=docker
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p offline-docker-649313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=docker: signal: killed (15m0.005840455s)
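The run above was killed by the harness once the 15-minute budget expired (signal: killed, 15m0.005840455s). Below is a minimal illustrative sketch of driving a long-running CLI under such a deadline; it assumes standard exec.CommandContext semantics and is not the actual helper behind aab_offline_test.go.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Give the child process a hard 15-minute budget, matching the timeout seen above.
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
        defer cancel()

        // Hypothetical invocation mirroring the logged command line.
        cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
            "start", "-p", "offline-docker-649313", "--alsologtostderr", "-v=1",
            "--memory=2048", "--wait=true", "--driver=docker", "--container-runtime=docker")

        out, err := cmd.CombinedOutput()
        if ctx.Err() == context.DeadlineExceeded {
            // exec.CommandContext sends SIGKILL when the context expires,
            // which surfaces as "signal: killed" in the test log.
            fmt.Printf("killed after deadline; captured %d bytes of output\n", len(out))
            return
        }
        if err != nil {
            fmt.Println("non-zero exit:", err)
        }
    }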
-- stdout --
* [offline-docker-649313] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20317
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "offline-docker-649313" primary control-plane node in "offline-docker-649313" cluster
* Pulling base image v0.0.46 ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Found network options:
- HTTP_PROXY=172.16.1.1:1
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.32.1 on Docker 27.4.1 ...
- env HTTP_PROXY=172.16.1.1:1
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
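The stdout above shows the start being forced into an effectively offline mode by pointing HTTP_PROXY at an unreachable endpoint (172.16.1.1:1), so only preloaded or cached artifacts can be used; the stderr that follows also warns that NO_PROXY does not include the minikube IP. A hedged sketch of injecting such a proxy into a child process environment (illustrative only, not the test's actual code):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64",
            "start", "-p", "offline-docker-649313", "--driver=docker", "--container-runtime=docker")
        // Route any image pulls through a proxy that accepts no connections,
        // so the start can only succeed from local caches and preloads.
        cmd.Env = append(os.Environ(), "HTTP_PROXY=172.16.1.1:1")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run() // error handling omitted in this sketch
    }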
** stderr **
I0127 12:43:50.252629 569503 out.go:345] Setting OutFile to fd 1 ...
I0127 12:43:50.252907 569503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:43:50.252917 569503 out.go:358] Setting ErrFile to fd 2...
I0127 12:43:50.252924 569503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:43:50.253117 569503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
I0127 12:43:50.253774 569503 out.go:352] Setting JSON to false
I0127 12:43:50.254766 569503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":30377,"bootTime":1737951453,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 12:43:50.254839 569503 start.go:139] virtualization: kvm guest
I0127 12:43:50.257110 569503 out.go:177] * [offline-docker-649313] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 12:43:50.259049 569503 notify.go:220] Checking for updates...
I0127 12:43:50.259073 569503 out.go:177] - MINIKUBE_LOCATION=20317
I0127 12:43:50.260819 569503 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 12:43:50.262145 569503 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
I0127 12:43:50.263276 569503 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
I0127 12:43:50.264534 569503 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 12:43:50.266222 569503 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 12:43:50.268099 569503 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 12:43:50.296647 569503 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0127 12:43:50.296793 569503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 12:43:50.360029 569503 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2025-01-27 12:43:50.346883761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0127 12:43:50.360194 569503 docker.go:318] overlay module found
I0127 12:43:50.361842 569503 out.go:177] * Using the docker driver based on user configuration
I0127 12:43:50.363056 569503 start.go:297] selected driver: docker
I0127 12:43:50.363071 569503 start.go:901] validating driver "docker" against <nil>
I0127 12:43:50.363097 569503 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 12:43:50.364273 569503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 12:43:50.436252 569503 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2025-01-27 12:43:50.422490841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0127 12:43:50.436494 569503 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0127 12:43:50.436864 569503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 12:43:50.438566 569503 out.go:177] * Using Docker driver with root privileges
I0127 12:43:50.440226 569503 cni.go:84] Creating CNI manager for ""
I0127 12:43:50.440322 569503 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0127 12:43:50.440333 569503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0127 12:43:50.440444 569503 start.go:340] cluster config:
{Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:43:50.442019 569503 out.go:177] * Starting "offline-docker-649313" primary control-plane node in "offline-docker-649313" cluster
I0127 12:43:50.443335 569503 cache.go:121] Beginning downloading kic base image for docker with docker
I0127 12:43:50.444678 569503 out.go:177] * Pulling base image v0.0.46 ...
I0127 12:43:50.445860 569503 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 12:43:50.445918 569503 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
I0127 12:43:50.445951 569503 cache.go:56] Caching tarball of preloaded images
I0127 12:43:50.445997 569503 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0127 12:43:50.446088 569503 preload.go:172] Found /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 12:43:50.446119 569503 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
I0127 12:43:50.446683 569503 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/config.json ...
I0127 12:43:50.446727 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/config.json: {Name:mk9ddecbecdff2b7295ef3347202aeeaf53c675e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:43:50.480974 569503 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0127 12:43:50.480997 569503 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0127 12:43:50.481019 569503 cache.go:227] Successfully downloaded all kic artifacts
I0127 12:43:50.481063 569503 start.go:360] acquireMachinesLock for offline-docker-649313: {Name:mkc0c4b7197804f1697dc3869952ab8c5283ac8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:43:50.481188 569503 start.go:364] duration metric: took 99.534µs to acquireMachinesLock for "offline-docker-649313"
I0127 12:43:50.481246 569503 start.go:93] Provisioning new machine with config: &{Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0127 12:43:50.481359 569503 start.go:125] createHost starting for "" (driver="docker")
I0127 12:43:50.483541 569503 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0127 12:43:50.483893 569503 start.go:159] libmachine.API.Create for "offline-docker-649313" (driver="docker")
I0127 12:43:50.483935 569503 client.go:168] LocalClient.Create starting
I0127 12:43:50.484006 569503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem
I0127 12:43:50.484049 569503 main.go:141] libmachine: Decoding PEM data...
I0127 12:43:50.484070 569503 main.go:141] libmachine: Parsing certificate...
I0127 12:43:50.484166 569503 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem
I0127 12:43:50.484221 569503 main.go:141] libmachine: Decoding PEM data...
I0127 12:43:50.484241 569503 main.go:141] libmachine: Parsing certificate...
I0127 12:43:50.484720 569503 cli_runner.go:164] Run: docker network inspect offline-docker-649313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 12:43:50.512526 569503 cli_runner.go:211] docker network inspect offline-docker-649313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 12:43:50.512625 569503 network_create.go:284] running [docker network inspect offline-docker-649313] to gather additional debugging logs...
I0127 12:43:50.512647 569503 cli_runner.go:164] Run: docker network inspect offline-docker-649313
W0127 12:43:50.537912 569503 cli_runner.go:211] docker network inspect offline-docker-649313 returned with exit code 1
I0127 12:43:50.537941 569503 network_create.go:287] error running [docker network inspect offline-docker-649313]: docker network inspect offline-docker-649313: exit status 1
stdout:
[]
stderr:
Error response from daemon: network offline-docker-649313 not found
I0127 12:43:50.537973 569503 network_create.go:289] output of [docker network inspect offline-docker-649313]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network offline-docker-649313 not found
** /stderr **
I0127 12:43:50.538265 569503 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 12:43:50.559488 569503 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a67733940b1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:47:92:de:9e} reservation:<nil>}
I0127 12:43:50.560755 569503 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-526e8be49203 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:00:a4:5e:8f} reservation:<nil>}
I0127 12:43:50.562386 569503 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1505344accd1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ee:63:1d:4f} reservation:<nil>}
I0127 12:43:50.563769 569503 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001710c60}
I0127 12:43:50.563805 569503 network_create.go:124] attempt to create docker network offline-docker-649313 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0127 12:43:50.564010 569503 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-649313 offline-docker-649313
I0127 12:43:50.644070 569503 network_create.go:108] docker network offline-docker-649313 192.168.76.0/24 created
I0127 12:43:50.644105 569503 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-649313" container
I0127 12:43:50.644187 569503 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0127 12:43:50.665074 569503 cli_runner.go:164] Run: docker volume create offline-docker-649313 --label name.minikube.sigs.k8s.io=offline-docker-649313 --label created_by.minikube.sigs.k8s.io=true
I0127 12:43:50.688658 569503 oci.go:103] Successfully created a docker volume offline-docker-649313
I0127 12:43:50.688738 569503 cli_runner.go:164] Run: docker run --rm --name offline-docker-649313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-649313 --entrypoint /usr/bin/test -v offline-docker-649313:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
I0127 12:43:52.136662 569503 cli_runner.go:217] Completed: docker run --rm --name offline-docker-649313-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-649313 --entrypoint /usr/bin/test -v offline-docker-649313:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (1.4478855s)
I0127 12:43:52.136699 569503 oci.go:107] Successfully prepared a docker volume offline-docker-649313
I0127 12:43:52.136747 569503 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 12:43:52.136776 569503 kic.go:194] Starting extracting preloaded images to volume ...
I0127 12:43:52.136871 569503 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-649313:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
I0127 12:44:00.425946 569503 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-649313:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (8.289024823s)
I0127 12:44:00.425979 569503 kic.go:203] duration metric: took 8.289197577s to extract preloaded images to volume ...
W0127 12:44:00.426091 569503 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0127 12:44:00.426181 569503 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0127 12:44:00.477148 569503 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname offline-docker-649313 --name offline-docker-649313 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-649313 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=offline-docker-649313 --network offline-docker-649313 --ip 192.168.76.2 --volume offline-docker-649313:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
I0127 12:44:00.836408 569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Running}}
I0127 12:44:00.856034 569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
I0127 12:44:00.877164 569503 cli_runner.go:164] Run: docker exec offline-docker-649313 stat /var/lib/dpkg/alternatives/iptables
I0127 12:44:00.932757 569503 oci.go:144] the created container "offline-docker-649313" has a running status.
I0127 12:44:00.932790 569503 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa...
I0127 12:44:01.140333 569503 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0127 12:44:01.170653 569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
I0127 12:44:01.193865 569503 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0127 12:44:01.193898 569503 kic_runner.go:114] Args: [docker exec --privileged offline-docker-649313 chown docker:docker /home/docker/.ssh/authorized_keys]
I0127 12:44:01.274058 569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
I0127 12:44:01.321813 569503 machine.go:93] provisionDockerMachine start ...
I0127 12:44:01.321918 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:01.339339 569503 main.go:141] libmachine: Using SSH client type: native
I0127 12:44:01.339566 569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 32984 <nil> <nil>}
I0127 12:44:01.339576 569503 main.go:141] libmachine: About to run SSH command:
hostname
I0127 12:44:01.547608 569503 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-649313
I0127 12:44:01.547637 569503 ubuntu.go:169] provisioning hostname "offline-docker-649313"
I0127 12:44:01.547688 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:01.569390 569503 main.go:141] libmachine: Using SSH client type: native
I0127 12:44:01.569616 569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 32984 <nil> <nil>}
I0127 12:44:01.569626 569503 main.go:141] libmachine: About to run SSH command:
sudo hostname offline-docker-649313 && echo "offline-docker-649313" | sudo tee /etc/hostname
I0127 12:44:01.711696 569503 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-649313
I0127 12:44:01.711781 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:01.731509 569503 main.go:141] libmachine: Using SSH client type: native
I0127 12:44:01.731734 569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 32984 <nil> <nil>}
I0127 12:44:01.731761 569503 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\soffline-docker-649313' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-649313/g' /etc/hosts;
else
echo '127.0.1.1 offline-docker-649313' | sudo tee -a /etc/hosts;
fi
fi
I0127 12:44:01.864345 569503 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 12:44:01.864381 569503 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20317-304536/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-304536/.minikube}
I0127 12:44:01.864406 569503 ubuntu.go:177] setting up certificates
I0127 12:44:01.864418 569503 provision.go:84] configureAuth start
I0127 12:44:01.864488 569503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-649313
I0127 12:44:01.881735 569503 provision.go:143] copyHostCerts
I0127 12:44:01.881802 569503 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem, removing ...
I0127 12:44:01.881811 569503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem
I0127 12:44:01.881878 569503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem (1082 bytes)
I0127 12:44:01.881972 569503 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem, removing ...
I0127 12:44:01.881980 569503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem
I0127 12:44:01.882002 569503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem (1123 bytes)
I0127 12:44:01.882070 569503 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem, removing ...
I0127 12:44:01.882077 569503 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem
I0127 12:44:01.882096 569503 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem (1679 bytes)
I0127 12:44:01.882156 569503 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem org=jenkins.offline-docker-649313 san=[127.0.0.1 192.168.76.2 localhost minikube offline-docker-649313]
I0127 12:44:01.996130 569503 provision.go:177] copyRemoteCerts
I0127 12:44:01.996210 569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 12:44:01.996265 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:02.013334 569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
I0127 12:44:02.104753 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0127 12:44:02.126022 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 12:44:02.147296 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 12:44:02.168824 569503 provision.go:87] duration metric: took 304.388928ms to configureAuth
I0127 12:44:02.168866 569503 ubuntu.go:193] setting minikube options for container-runtime
I0127 12:44:02.169078 569503 config.go:182] Loaded profile config "offline-docker-649313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:44:02.169150 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:02.185892 569503 main.go:141] libmachine: Using SSH client type: native
I0127 12:44:02.186087 569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 32984 <nil> <nil>}
I0127 12:44:02.186099 569503 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0127 12:44:02.312774 569503 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0127 12:44:02.312800 569503 ubuntu.go:71] root file system type: overlay
I0127 12:44:02.312942 569503 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0127 12:44:02.312999 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:02.330153 569503 main.go:141] libmachine: Using SSH client type: native
I0127 12:44:02.330380 569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 32984 <nil> <nil>}
I0127 12:44:02.330479 569503 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="HTTP_PROXY=172.16.1.1:1"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0127 12:44:02.468083 569503 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=HTTP_PROXY=172.16.1.1:1
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0127 12:44:02.468200 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:02.488599 569503 main.go:141] libmachine: Using SSH client type: native
I0127 12:44:02.488850 569503 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 32984 <nil> <nil>}
I0127 12:44:02.488877 569503 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0127 12:44:03.211959 569503 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-12-17 15:44:19.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-01-27 12:44:02.461683918 +0000
@@ -1,46 +1,50 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Environment=HTTP_PROXY=172.16.1.1:1
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0127 12:44:03.212003 569503 machine.go:96] duration metric: took 1.890166528s to provisionDockerMachine
I0127 12:44:03.212016 569503 client.go:171] duration metric: took 12.728069845s to LocalClient.Create
I0127 12:44:03.212033 569503 start.go:167] duration metric: took 12.728144901s to libmachine.API.Create "offline-docker-649313"
I0127 12:44:03.212044 569503 start.go:293] postStartSetup for "offline-docker-649313" (driver="docker")
I0127 12:44:03.212058 569503 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 12:44:03.212135 569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 12:44:03.212238 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:03.230384 569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
I0127 12:44:03.324960 569503 ssh_runner.go:195] Run: cat /etc/os-release
I0127 12:44:03.328108 569503 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0127 12:44:03.328146 569503 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0127 12:44:03.328165 569503 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0127 12:44:03.328202 569503 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0127 12:44:03.328222 569503 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/addons for local assets ...
I0127 12:44:03.328283 569503 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/files for local assets ...
I0127 12:44:03.328382 569503 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem -> 3113072.pem in /etc/ssl/certs
I0127 12:44:03.328502 569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 12:44:03.336281 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /etc/ssl/certs/3113072.pem (1708 bytes)
I0127 12:44:03.358027 569503 start.go:296] duration metric: took 145.964414ms for postStartSetup
I0127 12:44:03.358441 569503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-649313
I0127 12:44:03.374811 569503 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/config.json ...
I0127 12:44:03.375068 569503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0127 12:44:03.375108 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:03.392625 569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
I0127 12:44:03.482704 569503 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0127 12:44:03.488697 569503 start.go:128] duration metric: took 13.007318103s to createHost
I0127 12:44:03.488731 569503 start.go:83] releasing machines lock for "offline-docker-649313", held for 13.007527504s
I0127 12:44:03.488827 569503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" offline-docker-649313
I0127 12:44:03.525231 569503 out.go:177] * Found network options:
I0127 12:44:03.526794 569503 out.go:177] - HTTP_PROXY=172.16.1.1:1
W0127 12:44:03.528273 569503 out.go:270] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.76.2).
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.76.2).
I0127 12:44:03.529591 569503 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I0127 12:44:03.531012 569503 ssh_runner.go:195] Run: cat /version.json
I0127 12:44:03.531075 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:03.531097 569503 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 12:44:03.531172 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:03.558001 569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
I0127 12:44:03.561246 569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
I0127 12:44:03.756365 569503 ssh_runner.go:195] Run: systemctl --version
I0127 12:44:03.761676 569503 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0127 12:44:03.767034 569503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0127 12:44:03.801721 569503 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0127 12:44:03.801810 569503 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 12:44:03.835659 569503 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0127 12:44:03.835701 569503 start.go:495] detecting cgroup driver to use...
I0127 12:44:03.835743 569503 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0127 12:44:03.835896 569503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 12:44:03.855206 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 12:44:03.867636 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 12:44:03.880034 569503 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 12:44:03.880122 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 12:44:03.891467 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:44:03.903447 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 12:44:03.915635 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:44:03.926757 569503 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 12:44:03.936401 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 12:44:03.948478 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 12:44:03.960367 569503 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0127 12:44:03.973538 569503 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 12:44:03.983261 569503 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 12:44:03.993564 569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:44:04.087443 569503 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 12:44:04.200540 569503 start.go:495] detecting cgroup driver to use...
I0127 12:44:04.200624 569503 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0127 12:44:04.200691 569503 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0127 12:44:04.218794 569503 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0127 12:44:04.218852 569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 12:44:04.231149 569503 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 12:44:04.249596 569503 ssh_runner.go:195] Run: which cri-dockerd
I0127 12:44:04.253119 569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0127 12:44:04.263643 569503 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0127 12:44:04.285345 569503 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0127 12:44:04.389020 569503 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0127 12:44:04.495152 569503 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0127 12:44:04.495335 569503 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0127 12:44:04.517143 569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:44:04.613860 569503 ssh_runner.go:195] Run: sudo systemctl restart docker
I0127 12:44:07.422519 569503 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.808620933s)
I0127 12:44:07.422583 569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0127 12:44:07.437810 569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0127 12:44:07.452521 569503 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0127 12:44:07.560478 569503 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0127 12:44:07.662603 569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:44:07.765306 569503 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0127 12:44:07.782492 569503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0127 12:44:07.794422 569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:44:07.909756 569503 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0127 12:44:07.994285 569503 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0127 12:44:07.994382 569503 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0127 12:44:07.998439 569503 start.go:563] Will wait 60s for crictl version
I0127 12:44:07.998500 569503 ssh_runner.go:195] Run: which crictl
I0127 12:44:08.002291 569503 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 12:44:08.046037 569503 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.4.1
RuntimeApiVersion: v1
I0127 12:44:08.046101 569503 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 12:44:08.077605 569503 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 12:44:08.118153 569503 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.1 ...
I0127 12:44:08.124279 569503 out.go:177] - env HTTP_PROXY=172.16.1.1:1
I0127 12:44:08.125881 569503 cli_runner.go:164] Run: docker network inspect offline-docker-649313 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 12:44:08.150565 569503 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0127 12:44:08.155440 569503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 12:44:08.198992 569503 kubeadm.go:883] updating cluster {Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 12:44:08.199135 569503 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 12:44:08.199198 569503 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 12:44:08.237471 569503 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0127 12:44:08.237499 569503 docker.go:619] Images already preloaded, skipping extraction
I0127 12:44:08.237568 569503 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 12:44:08.272557 569503 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0127 12:44:08.272584 569503 cache_images.go:84] Images are preloaded, skipping loading
I0127 12:44:08.272596 569503 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.1 docker true true} ...
I0127 12:44:08.272713 569503 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=offline-docker-649313 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 12:44:08.272778 569503 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0127 12:44:08.340922 569503 cni.go:84] Creating CNI manager for ""
I0127 12:44:08.340956 569503 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0127 12:44:08.340972 569503 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 12:44:08.341001 569503 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:offline-docker-649313 NodeName:offline-docker-649313 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 12:44:08.341246 569503 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "offline-docker-649313"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0127 12:44:08.341786 569503 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 12:44:08.360490 569503 binaries.go:44] Found k8s binaries, skipping transfer
I0127 12:44:08.360568 569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 12:44:08.371926 569503 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
I0127 12:44:08.393465 569503 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 12:44:08.413898 569503 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2299 bytes)
I0127 12:44:08.434503 569503 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0127 12:44:08.439061 569503 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 12:44:08.452614 569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:44:08.557246 569503 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:44:08.573110 569503 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313 for IP: 192.168.76.2
I0127 12:44:08.573141 569503 certs.go:194] generating shared ca certs ...
I0127 12:44:08.573169 569503 certs.go:226] acquiring lock for ca certs: {Name:mk1b16f74c226e2be2c446b7baf1d60d1399508e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:08.573329 569503 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key
I0127 12:44:08.573387 569503 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key
I0127 12:44:08.573403 569503 certs.go:256] generating profile certs ...
I0127 12:44:08.573486 569503 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key
I0127 12:44:08.573514 569503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt with IP's: []
I0127 12:44:08.643889 569503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt ...
I0127 12:44:08.643920 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt: {Name:mk725ab3a72353fd47063c69e20c23063e887de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:08.644076 569503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key ...
I0127 12:44:08.644089 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key: {Name:mk2c66f11c5ec046d1666625044232dab99b9a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:08.644168 569503 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907
I0127 12:44:08.644208 569503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I0127 12:44:08.846066 569503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907 ...
I0127 12:44:08.846104 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907: {Name:mkddcd8041839b23dcac607919086c1c2fffddd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:08.846289 569503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907 ...
I0127 12:44:08.846306 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907: {Name:mkf1a59adbd32ce8d4801c6bfca55113d1ba2215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:08.846416 569503 certs.go:381] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt.e20ae907 -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt
I0127 12:44:08.846521 569503 certs.go:385] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key.e20ae907 -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key
I0127 12:44:08.846603 569503 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key
I0127 12:44:08.846634 569503 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt with IP's: []
I0127 12:44:08.938885 569503 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt ...
I0127 12:44:08.938965 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt: {Name:mkb36f71941b55d205a664a8dfa613e34fda67b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:08.939161 569503 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key ...
I0127 12:44:08.939208 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key: {Name:mk543fe641caf9d8b4f8f6176f603577f528c5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:08.939466 569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem (1338 bytes)
W0127 12:44:08.939535 569503 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307_empty.pem, impossibly tiny 0 bytes
I0127 12:44:08.939552 569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem (1679 bytes)
I0127 12:44:08.939591 569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem (1082 bytes)
I0127 12:44:08.939633 569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem (1123 bytes)
I0127 12:44:08.939661 569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem (1679 bytes)
I0127 12:44:08.939715 569503 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem (1708 bytes)
I0127 12:44:08.940591 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 12:44:08.966541 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 12:44:08.996736 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 12:44:09.023837 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 12:44:09.053893 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0127 12:44:09.096697 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0127 12:44:09.126435 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 12:44:09.155827 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 12:44:09.183391 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 12:44:09.208747 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem --> /usr/share/ca-certificates/311307.pem (1338 bytes)
I0127 12:44:09.237220 569503 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /usr/share/ca-certificates/3113072.pem (1708 bytes)
I0127 12:44:09.282530 569503 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 12:44:09.302279 569503 ssh_runner.go:195] Run: openssl version
I0127 12:44:09.308875 569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3113072.pem && ln -fs /usr/share/ca-certificates/3113072.pem /etc/ssl/certs/3113072.pem"
I0127 12:44:09.320299 569503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3113072.pem
I0127 12:44:09.324394 569503 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:14 /usr/share/ca-certificates/3113072.pem
I0127 12:44:09.324447 569503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3113072.pem
I0127 12:44:09.333344 569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3113072.pem /etc/ssl/certs/3ec20f2e.0"
I0127 12:44:09.345696 569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 12:44:09.357051 569503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 12:44:09.360940 569503 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:09 /usr/share/ca-certificates/minikubeCA.pem
I0127 12:44:09.360995 569503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 12:44:09.369576 569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 12:44:09.380653 569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/311307.pem && ln -fs /usr/share/ca-certificates/311307.pem /etc/ssl/certs/311307.pem"
I0127 12:44:09.391295 569503 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/311307.pem
I0127 12:44:09.395422 569503 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:14 /usr/share/ca-certificates/311307.pem
I0127 12:44:09.395474 569503 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/311307.pem
I0127 12:44:09.403659 569503 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/311307.pem /etc/ssl/certs/51391683.0"
I0127 12:44:09.413600 569503 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 12:44:09.417456 569503 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0127 12:44:09.417511 569503 kubeadm.go:392] StartCluster: {Name:offline-docker-649313 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:offline-docker-649313 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:44:09.417648 569503 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0127 12:44:09.442963 569503 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 12:44:09.456533 569503 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 12:44:09.467137 569503 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0127 12:44:09.467205 569503 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 12:44:09.478120 569503 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 12:44:09.478142 569503 kubeadm.go:157] found existing configuration files:
I0127 12:44:09.478194 569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 12:44:09.488385 569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 12:44:09.488443 569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 12:44:09.498213 569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 12:44:09.508271 569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 12:44:09.508381 569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 12:44:09.518232 569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 12:44:09.528477 569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 12:44:09.528528 569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 12:44:09.537802 569503 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 12:44:09.546871 569503 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 12:44:09.546931 569503 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 12:44:09.559335 569503 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0127 12:44:09.609157 569503 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 12:44:09.609250 569503 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 12:44:09.637990 569503 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0127 12:44:09.638093 569503 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1074-gcp
I0127 12:44:09.638140 569503 kubeadm.go:310] OS: Linux
I0127 12:44:09.638202 569503 kubeadm.go:310] CGROUPS_CPU: enabled
I0127 12:44:09.638264 569503 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0127 12:44:09.638327 569503 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0127 12:44:09.638394 569503 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0127 12:44:09.638459 569503 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0127 12:44:09.638525 569503 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0127 12:44:09.638588 569503 kubeadm.go:310] CGROUPS_PIDS: enabled
I0127 12:44:09.638655 569503 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0127 12:44:09.638723 569503 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0127 12:44:09.716704 569503 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 12:44:09.716872 569503 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 12:44:09.717074 569503 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 12:44:09.728893 569503 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 12:44:09.734593 569503 out.go:235] - Generating certificates and keys ...
I0127 12:44:09.734734 569503 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 12:44:09.734820 569503 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 12:44:09.877397 569503 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0127 12:44:10.352339 569503 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0127 12:44:10.884423 569503 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0127 12:44:10.984394 569503 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0127 12:44:11.086069 569503 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0127 12:44:11.086353 569503 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost offline-docker-649313] and IPs [192.168.76.2 127.0.0.1 ::1]
I0127 12:44:11.311337 569503 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0127 12:44:11.311737 569503 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost offline-docker-649313] and IPs [192.168.76.2 127.0.0.1 ::1]
I0127 12:44:11.474151 569503 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0127 12:44:12.240459 569503 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0127 12:44:12.481763 569503 kubeadm.go:310] [certs] Generating "sa" key and public key
I0127 12:44:12.481876 569503 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 12:44:12.854762 569503 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 12:44:13.071301 569503 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 12:44:13.189328 569503 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 12:44:13.359691 569503 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 12:44:13.493031 569503 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 12:44:13.493902 569503 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 12:44:13.497545 569503 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 12:44:13.500055 569503 out.go:235] - Booting up control plane ...
I0127 12:44:13.500215 569503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 12:44:13.500330 569503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 12:44:13.501135 569503 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 12:44:13.516510 569503 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 12:44:13.523184 569503 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 12:44:13.523253 569503 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 12:44:13.635973 569503 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 12:44:13.636121 569503 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 12:44:14.637433 569503 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001504662s
I0127 12:44:14.637569 569503 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 12:44:22.139494 569503 kubeadm.go:310] [api-check] The API server is healthy after 7.502026371s
I0127 12:44:22.152244 569503 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 12:44:22.162955 569503 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 12:44:22.184311 569503 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 12:44:22.184653 569503 kubeadm.go:310] [mark-control-plane] Marking the node offline-docker-649313 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 12:44:22.193280 569503 kubeadm.go:310] [bootstrap-token] Using token: 2uq4yb.npayucp7r9tcyqdf
I0127 12:44:22.194774 569503 out.go:235] - Configuring RBAC rules ...
I0127 12:44:22.194934 569503 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 12:44:22.199370 569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 12:44:22.206358 569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 12:44:22.208921 569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 12:44:22.211572 569503 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 12:44:22.214418 569503 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 12:44:22.546062 569503 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 12:44:22.992475 569503 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 12:44:23.545784 569503 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 12:44:23.546801 569503 kubeadm.go:310]
I0127 12:44:23.546876 569503 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 12:44:23.546885 569503 kubeadm.go:310]
I0127 12:44:23.546977 569503 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 12:44:23.546995 569503 kubeadm.go:310]
I0127 12:44:23.547019 569503 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 12:44:23.547094 569503 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 12:44:23.547168 569503 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 12:44:23.547180 569503 kubeadm.go:310]
I0127 12:44:23.547224 569503 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 12:44:23.547231 569503 kubeadm.go:310]
I0127 12:44:23.547301 569503 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 12:44:23.547313 569503 kubeadm.go:310]
I0127 12:44:23.547401 569503 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 12:44:23.547517 569503 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 12:44:23.547625 569503 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 12:44:23.547639 569503 kubeadm.go:310]
I0127 12:44:23.547781 569503 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 12:44:23.547907 569503 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 12:44:23.547918 569503 kubeadm.go:310]
I0127 12:44:23.547991 569503 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2uq4yb.npayucp7r9tcyqdf \
I0127 12:44:23.548080 569503 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a \
I0127 12:44:23.548117 569503 kubeadm.go:310] --control-plane
I0127 12:44:23.548129 569503 kubeadm.go:310]
I0127 12:44:23.548271 569503 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 12:44:23.548284 569503 kubeadm.go:310]
I0127 12:44:23.548385 569503 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2uq4yb.npayucp7r9tcyqdf \
I0127 12:44:23.548472 569503 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a
I0127 12:44:23.550434 569503 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0127 12:44:23.550629 569503 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1074-gcp\n", err: exit status 1
I0127 12:44:23.550735 569503 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 12:44:23.550758 569503 cni.go:84] Creating CNI manager for ""
I0127 12:44:23.550776 569503 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0127 12:44:23.553335 569503 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 12:44:23.554564 569503 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 12:44:23.563295 569503 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 12:44:23.579503 569503 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 12:44:23.579647 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes offline-docker-649313 minikube.k8s.io/updated_at=2025_01_27T12_44_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=offline-docker-649313 minikube.k8s.io/primary=true
I0127 12:44:23.579654 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:23.588351 569503 ops.go:34] apiserver oom_adj: -16
I0127 12:44:23.678210 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:24.178524 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:24.678472 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:25.179124 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:25.679198 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:26.179246 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:26.679080 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:27.178423 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:27.679194 569503 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:44:27.774956 569503 kubeadm.go:1113] duration metric: took 4.195360661s to wait for elevateKubeSystemPrivileges
I0127 12:44:27.774994 569503 kubeadm.go:394] duration metric: took 18.357488722s to StartCluster
I0127 12:44:27.775018 569503 settings.go:142] acquiring lock: {Name:mk55dbc0704f2f9d31c80856a45552242884623b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:27.775096 569503 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20317-304536/kubeconfig
I0127 12:44:27.776545 569503 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/kubeconfig: {Name:mk59d9102d1fe380f0fe65cd8c2acffe42bba157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:44:27.776825 569503 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0127 12:44:27.777057 569503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0127 12:44:27.777155 569503 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 12:44:27.777241 569503 addons.go:69] Setting storage-provisioner=true in profile "offline-docker-649313"
I0127 12:44:27.777268 569503 addons.go:238] Setting addon storage-provisioner=true in "offline-docker-649313"
I0127 12:44:27.777303 569503 host.go:66] Checking if "offline-docker-649313" exists ...
I0127 12:44:27.777317 569503 config.go:182] Loaded profile config "offline-docker-649313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:44:27.777379 569503 addons.go:69] Setting default-storageclass=true in profile "offline-docker-649313"
I0127 12:44:27.777396 569503 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "offline-docker-649313"
I0127 12:44:27.777723 569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
I0127 12:44:27.777844 569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
I0127 12:44:27.778947 569503 out.go:177] * Verifying Kubernetes components...
I0127 12:44:27.780238 569503 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:44:27.811588 569503 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 12:44:27.813011 569503 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:44:27.813035 569503 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 12:44:27.813115 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:27.819046 569503 kapi.go:59] client config for offline-docker-649313: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key", CAFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 12:44:27.819717 569503 addons.go:238] Setting addon default-storageclass=true in "offline-docker-649313"
I0127 12:44:27.819754 569503 host.go:66] Checking if "offline-docker-649313" exists ...
I0127 12:44:27.820089 569503 cli_runner.go:164] Run: docker container inspect offline-docker-649313 --format={{.State.Status}}
I0127 12:44:27.836922 569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
I0127 12:44:27.845939 569503 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 12:44:27.845966 569503 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 12:44:27.846026 569503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-649313
I0127 12:44:27.863948 569503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/offline-docker-649313/id_rsa Username:docker}
I0127 12:44:27.919767 569503 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0127 12:44:27.985260 569503 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:44:27.985600 569503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:44:27.993994 569503 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 12:44:28.470697 569503 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
I0127 12:44:28.471817 569503 kapi.go:59] client config for offline-docker-649313: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key", CAFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0127 12:44:28.722116 569503 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "offline-docker-649313" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
E0127 12:44:28.722150 569503 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
I0127 12:44:28.911477 569503 kapi.go:59] client config for offline-docker-649313: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/profiles/offline-docker-649313/client.key", CAFile:"/home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 12:44:28.911771 569503 node_ready.go:35] waiting up to 6m0s for node "offline-docker-649313" to be "Ready" ...
I0127 12:44:28.912085 569503 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0127 12:44:28.912115 569503 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0127 12:44:28.915967 569503 node_ready.go:49] node "offline-docker-649313" has status "Ready":"True"
I0127 12:44:28.915992 569503 node_ready.go:38] duration metric: took 4.186192ms for node "offline-docker-649313" to be "Ready" ...
I0127 12:44:28.916006 569503 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:44:28.923003 569503 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0127 12:44:28.924514 569503 addons.go:514] duration metric: took 1.147359304s for enable addons: enabled=[storage-provisioner default-storageclass]
I0127 12:44:28.925005 569503 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace to be "Ready" ...
I0127 12:44:30.931210 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:32.932004 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:35.431086 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:37.931014 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:40.432114 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:42.434906 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:44.931100 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:46.931843 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:49.431763 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:51.930608 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:53.930751 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:55.931544 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:44:58.432234 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:45:00.931364 569503 pod_ready.go:103] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"False"
I0127 12:45:02.431948 569503 pod_ready.go:93] pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace has status "Ready":"True"
I0127 12:45:02.431984 569503 pod_ready.go:82] duration metric: took 33.506958094s for pod "coredns-668d6bf9bc-6nkx4" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.432003 569503 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-7rv77" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.436910 569503 pod_ready.go:93] pod "coredns-668d6bf9bc-7rv77" in "kube-system" namespace has status "Ready":"True"
I0127 12:45:02.436933 569503 pod_ready.go:82] duration metric: took 4.921663ms for pod "coredns-668d6bf9bc-7rv77" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.436944 569503 pod_ready.go:79] waiting up to 6m0s for pod "etcd-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.441105 569503 pod_ready.go:93] pod "etcd-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
I0127 12:45:02.441135 569503 pod_ready.go:82] duration metric: took 4.182745ms for pod "etcd-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.441148 569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.446059 569503 pod_ready.go:93] pod "kube-apiserver-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
I0127 12:45:02.446087 569503 pod_ready.go:82] duration metric: took 4.928806ms for pod "kube-apiserver-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.446101 569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.450024 569503 pod_ready.go:93] pod "kube-controller-manager-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
I0127 12:45:02.450088 569503 pod_ready.go:82] duration metric: took 3.977861ms for pod "kube-controller-manager-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.450114 569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nwtdt" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.829725 569503 pod_ready.go:93] pod "kube-proxy-nwtdt" in "kube-system" namespace has status "Ready":"True"
I0127 12:45:02.829760 569503 pod_ready.go:82] duration metric: took 379.627424ms for pod "kube-proxy-nwtdt" in "kube-system" namespace to be "Ready" ...
I0127 12:45:02.829775 569503 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:03.228873 569503 pod_ready.go:93] pod "kube-scheduler-offline-docker-649313" in "kube-system" namespace has status "Ready":"True"
I0127 12:45:03.228896 569503 pod_ready.go:82] duration metric: took 399.11306ms for pod "kube-scheduler-offline-docker-649313" in "kube-system" namespace to be "Ready" ...
I0127 12:45:03.228909 569503 pod_ready.go:39] duration metric: took 34.312891738s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:45:03.228926 569503 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:45:03.228977 569503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:45:03.240680 569503 api_server.go:72] duration metric: took 35.46381462s to wait for apiserver process to appear ...
I0127 12:45:03.240707 569503 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:45:03.240732 569503 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0127 12:45:03.244842 569503 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0127 12:45:03.245735 569503 api_server.go:141] control plane version: v1.32.1
I0127 12:45:03.245759 569503 api_server.go:131] duration metric: took 5.045728ms to wait for apiserver health ...
I0127 12:45:03.245768 569503 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:45:03.431863 569503 system_pods.go:59] 8 kube-system pods found
I0127 12:45:03.431895 569503 system_pods.go:61] "coredns-668d6bf9bc-6nkx4" [44bc4f70-dd40-4791-864c-0458af6a5fe8] Running
I0127 12:45:03.431900 569503 system_pods.go:61] "coredns-668d6bf9bc-7rv77" [d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6] Running
I0127 12:45:03.431903 569503 system_pods.go:61] "etcd-offline-docker-649313" [2d38e1ea-f32a-48e3-a76a-3f528870d44f] Running
I0127 12:45:03.431907 569503 system_pods.go:61] "kube-apiserver-offline-docker-649313" [bbc9118e-6f26-4183-9206-d53c19d12309] Running
I0127 12:45:03.431911 569503 system_pods.go:61] "kube-controller-manager-offline-docker-649313" [58a91e3e-1ab9-4516-82f0-63c2d864c1ee] Running
I0127 12:45:03.431913 569503 system_pods.go:61] "kube-proxy-nwtdt" [a2371845-b951-4e52-9c2a-01a394a9b403] Running
I0127 12:45:03.431916 569503 system_pods.go:61] "kube-scheduler-offline-docker-649313" [5785bcf7-128b-48bd-aaf2-42bdb490bdb7] Running
I0127 12:45:03.431919 569503 system_pods.go:61] "storage-provisioner" [56cf6fce-41be-4b78-9a32-86e8e902d97c] Running
I0127 12:45:03.431925 569503 system_pods.go:74] duration metric: took 186.15091ms to wait for pod list to return data ...
I0127 12:45:03.431933 569503 default_sa.go:34] waiting for default service account to be created ...
I0127 12:45:03.629572 569503 default_sa.go:45] found service account: "default"
I0127 12:45:03.629608 569503 default_sa.go:55] duration metric: took 197.667178ms for default service account to be created ...
I0127 12:45:03.629621 569503 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 12:45:03.832085 569503 system_pods.go:87] 8 kube-system pods found
** /stderr **
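
The stderr block above shows the readiness phase of the start command: roughly 34 seconds of polling until every system-critical pod reports Ready, a pgrep check for the kube-apiserver process, and finally an HTTPS probe of the apiserver's /healthz endpoint, which returned 200/ok. Below is a minimal sketch of that last probe in Go, using only the standard library. The URL is copied from the log; the retry loop and timeout values are illustrative, and skipping TLS verification is a shortcut for the sketch only (minikube authenticates with the cluster's own certificates instead).

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    // Endpoint copied from the log above; a real check would use the CA and
    // client certificates from the kubeconfig instead of skipping verification.
    url := "https://192.168.76.2:8443/healthz"
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                return
            }
        }
        time.Sleep(2 * time.Second) // retry until the deadline, like the wait loop in the log
    }
    fmt.Println("apiserver never became healthy")
}
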
aab_offline_test.go:58: out/minikube-linux-amd64 start -p offline-docker-649313 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker --container-runtime=docker failed: signal: killed
panic.go:629: *** TestOffline FAILED at 2025-01-27 12:58:50.239414494 +0000 UTC m=+3021.326846654
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect offline-docker-649313
I0127 12:58:50.249825 311307 config.go:182] Loaded profile config "false-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
helpers_test.go:235: (dbg) docker inspect offline-docker-649313:
-- stdout --
[
{
"Id": "60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec",
"Created": "2025-01-27T12:44:00.495468028Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 570991,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-01-27T12:44:00.629049387Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
"ResolvConfPath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/hostname",
"HostsPath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/hosts",
"LogPath": "/var/lib/docker/containers/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec/60587125548add4445e09e61f8ee09e6fae4a08f63db06d68ede002e7d074eec-json.log",
"Name": "/offline-docker-649313",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"offline-docker-649313:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "offline-docker-649313",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271-init/diff:/var/lib/docker/overlay2/d46080dabfd09e849513ff8da7d233565f9a821ed6a2597f6c352e21817feda4/diff",
"MergedDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271/merged",
"UpperDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271/diff",
"WorkDir": "/var/lib/docker/overlay2/ebf7ea7e216d90472bd8d7f299f22b9ae6d5a426f8bf18f9a65fbe69b13ef271/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "offline-docker-649313",
"Source": "/var/lib/docker/volumes/offline-docker-649313/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "offline-docker-649313",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "offline-docker-649313",
"name.minikube.sigs.k8s.io": "offline-docker-649313",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "258c70e2e1b1e9ff20bdf397aaebcc41e12c4bfa092616a709da0b58ba7e207e",
"SandboxKey": "/var/run/docker/netns/258c70e2e1b1",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32984"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32986"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32989"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32987"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32988"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"offline-docker-649313": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "85fc0d82ab8d81d401616a015fd721eade8335a1d19dad0d50d1f59cc93fc120",
"EndpointID": "7b3011dbfb414aafc45bde98ea2fe15d11a70c649bfce95abe0d4c36a18fc7dd",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"offline-docker-649313",
"60587125548a"
]
}
}
}
}
]
-- /stdout --
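
The post-mortem above dumps the full docker inspect JSON for the node container. A hedged sketch of pulling out the two fields the later status checks rely on, the container state and the host port mapped to 8443/tcp, by shelling out to docker inspect and decoding the result with encoding/json; the struct below models only the fields visible in the dump above.

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// Only the fields needed here are modelled; the real inspect payload is much larger.
type inspect struct {
    State struct {
        Status  string
        Running bool
    }
    NetworkSettings struct {
        Ports map[string][]struct {
            HostIp   string
            HostPort string
        }
    }
}

func main() {
    out, err := exec.Command("docker", "inspect", "offline-docker-649313").Output()
    if err != nil {
        panic(err)
    }
    var containers []inspect
    if err := json.Unmarshal(out, &containers); err != nil {
        panic(err)
    }
    c := containers[0]
    fmt.Println("status:", c.State.Status) // "running" in the dump above
    if p := c.NetworkSettings.Ports["8443/tcp"]; len(p) > 0 {
        // 127.0.0.1:32988 in the dump above
        fmt.Printf("apiserver reachable on %s:%s\n", p[0].HostIp, p[0].HostPort)
    }
}
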
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p offline-docker-649313 -n offline-docker-649313
helpers_test.go:244: <<< TestOffline FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestOffline]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p offline-docker-649313 logs -n 25
helpers_test.go:252: TestOffline logs:
-- stdout --
==> Audit <==
|---------|---------------------------------|-----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------|-----------------------|---------|---------|---------------------|---------------------|
| ssh | -p custom-flannel-244099 pgrep | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | -a kubelet | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | cat /etc/nsswitch.conf | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | cat /etc/hosts | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | cat /etc/resolv.conf | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | crictl pods | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | crictl ps --all | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | find /etc/cni -type f -exec sh | | | | | |
| | -c 'echo {}; cat {}' \; | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | ip a s | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | ip r s | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | iptables-save | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | iptables -t nat -L -n -v | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | cat /run/flannel/subnet.env | | | | | |
| ssh | -p custom-flannel-244099 | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | |
| | sudo cat | | | | | |
| | /etc/kube-flannel/cni-conf.json | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | systemctl status kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p custom-flannel-244099 | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | sudo systemctl cat kubelet | | | | | |
| | --no-pager | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | journalctl -xeu kubelet --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p custom-flannel-244099 | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | sudo cat | | | | | |
| | /etc/kubernetes/kubelet.conf | | | | | |
| ssh | -p custom-flannel-244099 | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | sudo cat | | | | | |
| | /var/lib/kubelet/config.yaml | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | systemctl status docker --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p custom-flannel-244099 | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | sudo systemctl cat docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | cat /etc/docker/daemon.json | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | docker system info | | | | | |
| ssh | -p false-244099 pgrep -a | false-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | kubelet | | | | | |
| ssh | -p custom-flannel-244099 sudo | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | systemctl status cri-docker | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p custom-flannel-244099 | custom-flannel-244099 | jenkins | v1.35.0 | 27 Jan 25 12:58 UTC | 27 Jan 25 12:58 UTC |
| | sudo systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
|---------|---------------------------------|-----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 12:58:12
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
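
The Last Start section below follows the klog line format described just above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). A small illustrative Go snippet for splitting such lines into their parts, which can be convenient when filtering these CI logs; the regular expression is an assumption derived from that format string, not code taken from minikube.

package main

import (
    "fmt"
    "regexp"
)

// Matches the header format stated above: severity, MMDD, time, thread id,
// file:line, then the message.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
    line := "I0127 12:58:12.373600 750031 out.go:345] Setting OutFile to fd 1 ..."
    if m := klogLine.FindStringSubmatch(line); m != nil {
        fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
}
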
I0127 12:58:12.373600 750031 out.go:345] Setting OutFile to fd 1 ...
I0127 12:58:12.373934 750031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:58:12.373947 750031 out.go:358] Setting ErrFile to fd 2...
I0127 12:58:12.373954 750031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:58:12.374171 750031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-304536/.minikube/bin
I0127 12:58:12.375009 750031 out.go:352] Setting JSON to false
I0127 12:58:12.376646 750031 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31239,"bootTime":1737951453,"procs":475,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 12:58:12.376755 750031 start.go:139] virtualization: kvm guest
I0127 12:58:12.379012 750031 out.go:177] * [false-244099] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 12:58:12.380343 750031 notify.go:220] Checking for updates...
I0127 12:58:12.380416 750031 out.go:177] - MINIKUBE_LOCATION=20317
I0127 12:58:12.381639 750031 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 12:58:12.383050 750031 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20317-304536/kubeconfig
I0127 12:58:12.384409 750031 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-304536/.minikube
I0127 12:58:12.385817 750031 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 12:58:12.387085 750031 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 12:58:12.388874 750031 config.go:182] Loaded profile config "custom-flannel-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:58:12.389083 750031 config.go:182] Loaded profile config "default-k8s-diff-port-359066": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:58:12.389286 750031 config.go:182] Loaded profile config "offline-docker-649313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:58:12.389402 750031 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 12:58:12.418652 750031 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0127 12:58:12.418743 750031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 12:58:12.479252 750031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:74 SystemTime:2025-01-27 12:58:12.467704931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0127 12:58:12.479399 750031 docker.go:318] overlay module found
I0127 12:58:12.482107 750031 out.go:177] * Using the docker driver based on user configuration
I0127 12:58:12.483494 750031 start.go:297] selected driver: docker
I0127 12:58:12.483513 750031 start.go:901] validating driver "docker" against <nil>
I0127 12:58:12.483527 750031 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 12:58:12.484727 750031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0127 12:58:12.549249 750031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:74 SystemTime:2025-01-27 12:58:12.540403252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0127 12:58:12.549550 750031 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0127 12:58:12.549877 750031 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 12:58:12.551866 750031 out.go:177] * Using Docker driver with root privileges
I0127 12:58:12.553239 750031 cni.go:84] Creating CNI manager for "false"
I0127 12:58:12.553328 750031 start.go:340] cluster config:
{Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Network
Plugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
I0127 12:58:12.554773 750031 out.go:177] * Starting "false-244099" primary control-plane node in "false-244099" cluster
I0127 12:58:12.556066 750031 cache.go:121] Beginning downloading kic base image for docker with docker
I0127 12:58:12.557346 750031 out.go:177] * Pulling base image v0.0.46 ...
I0127 12:58:12.558687 750031 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 12:58:12.558741 750031 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
I0127 12:58:12.558755 750031 cache.go:56] Caching tarball of preloaded images
I0127 12:58:12.558790 750031 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0127 12:58:12.558865 750031 preload.go:172] Found /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 12:58:12.558883 750031 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
I0127 12:58:12.559002 750031 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/config.json ...
I0127 12:58:12.559026 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/config.json: {Name:mkd8d862cb70d3b3e09f1f416894d1cde8bc47e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:12.586094 750031 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0127 12:58:12.586129 750031 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0127 12:58:12.586153 750031 cache.go:227] Successfully downloaded all kic artifacts
I0127 12:58:12.586195 750031 start.go:360] acquireMachinesLock for false-244099: {Name:mkb9db4e1e07c88c0876893047ca693eae187ed3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 12:58:12.586331 750031 start.go:364] duration metric: took 112.781µs to acquireMachinesLock for "false-244099"
I0127 12:58:12.586365 750031 start.go:93] Provisioning new machine with config: &{Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0127 12:58:12.586474 750031 start.go:125] createHost starting for "" (driver="docker")
I0127 12:58:10.496317 740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:12.993460 740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:12.186407 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:14.687871 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:12.589384 750031 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0127 12:58:12.589712 750031 start.go:159] libmachine.API.Create for "false-244099" (driver="docker")
I0127 12:58:12.589759 750031 client.go:168] LocalClient.Create starting
I0127 12:58:12.589849 750031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem
I0127 12:58:12.589901 750031 main.go:141] libmachine: Decoding PEM data...
I0127 12:58:12.589918 750031 main.go:141] libmachine: Parsing certificate...
I0127 12:58:12.589985 750031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem
I0127 12:58:12.590015 750031 main.go:141] libmachine: Decoding PEM data...
I0127 12:58:12.590031 750031 main.go:141] libmachine: Parsing certificate...
I0127 12:58:12.590499 750031 cli_runner.go:164] Run: docker network inspect false-244099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 12:58:12.613828 750031 cli_runner.go:211] docker network inspect false-244099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 12:58:12.613934 750031 network_create.go:284] running [docker network inspect false-244099] to gather additional debugging logs...
I0127 12:58:12.613967 750031 cli_runner.go:164] Run: docker network inspect false-244099
W0127 12:58:12.638796 750031 cli_runner.go:211] docker network inspect false-244099 returned with exit code 1
I0127 12:58:12.638834 750031 network_create.go:287] error running [docker network inspect false-244099]: docker network inspect false-244099: exit status 1
stdout:
[]
stderr:
Error response from daemon: network false-244099 not found
I0127 12:58:12.638850 750031 network_create.go:289] output of [docker network inspect false-244099]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network false-244099 not found
** /stderr **
I0127 12:58:12.638985 750031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 12:58:12.658007 750031 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a67733940b1c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:47:92:de:9e} reservation:<nil>}
I0127 12:58:12.659136 750031 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-526e8be49203 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:00:a4:5e:8f} reservation:<nil>}
I0127 12:58:12.660548 750031 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1505344accd1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ee:63:1d:4f} reservation:<nil>}
I0127 12:58:12.661602 750031 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-85fc0d82ab8d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:d8:f9:c7:10} reservation:<nil>}
I0127 12:58:12.662532 750031 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d72310}
I0127 12:58:12.662561 750031 network_create.go:124] attempt to create docker network false-244099 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0127 12:58:12.662605 750031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-244099 false-244099
I0127 12:58:12.749517 750031 network_create.go:108] docker network false-244099 192.168.85.0/24 created
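
The network_create step above probes the existing bridge networks, skips the subnets already in use (192.168.49.0/24 through 192.168.76.0/24) and creates a new bridge network on the first free one. A hedged sketch of just the creation call, built from the same docker network create flags shown in the log; the subnet selection is reduced to a fixed value here for brevity.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    subnet, gateway, name := "192.168.85.0/24", "192.168.85.1", "false-244099"
    // Flags mirror the docker network create invocation in the log above.
    cmd := exec.Command("docker", "network", "create",
        "--driver=bridge",
        "--subnet="+subnet,
        "--gateway="+gateway,
        "-o", "--ip-masq",
        "-o", "--icc",
        "-o", "com.docker.network.driver.mtu=1500",
        "--label=created_by.minikube.sigs.k8s.io=true",
        "--label=name.minikube.sigs.k8s.io="+name,
        name,
    )
    out, err := cmd.CombinedOutput()
    if err != nil {
        fmt.Printf("network create failed: %v\n%s", err, out)
        return
    }
    fmt.Printf("created network %s (%s): %s", name, subnet, out)
}
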
I0127 12:58:12.749558 750031 kic.go:121] calculated static IP "192.168.85.2" for the "false-244099" container
I0127 12:58:12.749619 750031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0127 12:58:12.770355 750031 cli_runner.go:164] Run: docker volume create false-244099 --label name.minikube.sigs.k8s.io=false-244099 --label created_by.minikube.sigs.k8s.io=true
I0127 12:58:12.794239 750031 oci.go:103] Successfully created a docker volume false-244099
I0127 12:58:12.794348 750031 cli_runner.go:164] Run: docker run --rm --name false-244099-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-244099 --entrypoint /usr/bin/test -v false-244099:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
I0127 12:58:13.416927 750031 oci.go:107] Successfully prepared a docker volume false-244099
I0127 12:58:13.416983 750031 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 12:58:13.417016 750031 kic.go:194] Starting extracting preloaded images to volume ...
I0127 12:58:13.417119 750031 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-244099:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
I0127 12:58:15.486625 740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:17.986271 740231 pod_ready.go:103] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:17.187156 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:19.687250 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:19.211341 750031 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20317-304536/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-244099:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (5.794179028s)
I0127 12:58:19.211387 750031 kic.go:203] duration metric: took 5.794367665s to extract preloaded images to volume ...
W0127 12:58:19.211522 750031 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0127 12:58:19.211657 750031 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0127 12:58:19.276256 750031 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-244099 --name false-244099 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-244099 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-244099 --network false-244099 --ip 192.168.85.2 --volume false-244099:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
I0127 12:58:19.718438 750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Running}}
I0127 12:58:19.735808 750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
I0127 12:58:19.757072 750031 cli_runner.go:164] Run: docker exec false-244099 stat /var/lib/dpkg/alternatives/iptables
I0127 12:58:19.801509 750031 oci.go:144] the created container "false-244099" has a running status.
I0127 12:58:19.801555 750031 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa...
I0127 12:58:20.478786 750031 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0127 12:58:20.507101 750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
I0127 12:58:20.526419 750031 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0127 12:58:20.526447 750031 kic_runner.go:114] Args: [docker exec --privileged false-244099 chown docker:docker /home/docker/.ssh/authorized_keys]
I0127 12:58:20.567246 750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
I0127 12:58:20.585964 750031 machine.go:93] provisionDockerMachine start ...
I0127 12:58:20.586077 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:20.604009 750031 main.go:141] libmachine: Using SSH client type: native
I0127 12:58:20.604288 750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 33129 <nil> <nil>}
I0127 12:58:20.604306 750031 main.go:141] libmachine: About to run SSH command:
hostname
I0127 12:58:20.731680 750031 main.go:141] libmachine: SSH cmd err, output: <nil>: false-244099
I0127 12:58:20.731710 750031 ubuntu.go:169] provisioning hostname "false-244099"
I0127 12:58:20.731772 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:20.751635 750031 main.go:141] libmachine: Using SSH client type: native
I0127 12:58:20.751836 750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 33129 <nil> <nil>}
I0127 12:58:20.751851 750031 main.go:141] libmachine: About to run SSH command:
sudo hostname false-244099 && echo "false-244099" | sudo tee /etc/hostname
I0127 12:58:20.897005 750031 main.go:141] libmachine: SSH cmd err, output: <nil>: false-244099
I0127 12:58:20.897088 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:20.914699 750031 main.go:141] libmachine: Using SSH client type: native
I0127 12:58:20.914918 750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 33129 <nil> <nil>}
I0127 12:58:20.914944 750031 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sfalse-244099' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-244099/g' /etc/hosts;
else
echo '127.0.1.1 false-244099' | sudo tee -a /etc/hosts;
fi
fi
I0127 12:58:21.048525 750031 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 12:58:21.048562 750031 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20317-304536/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-304536/.minikube}
I0127 12:58:21.048597 750031 ubuntu.go:177] setting up certificates
I0127 12:58:21.048609 750031 provision.go:84] configureAuth start
I0127 12:58:21.048679 750031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-244099
I0127 12:58:21.066399 750031 provision.go:143] copyHostCerts
I0127 12:58:21.066460 750031 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem, removing ...
I0127 12:58:21.066469 750031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem
I0127 12:58:21.066535 750031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/ca.pem (1082 bytes)
I0127 12:58:21.066622 750031 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem, removing ...
I0127 12:58:21.066630 750031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem
I0127 12:58:21.066653 750031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/cert.pem (1123 bytes)
I0127 12:58:21.066712 750031 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem, removing ...
I0127 12:58:21.066719 750031 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem
I0127 12:58:21.066739 750031 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-304536/.minikube/key.pem (1679 bytes)
I0127 12:58:21.066795 750031 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem org=jenkins.false-244099 san=[127.0.0.1 192.168.85.2 false-244099 localhost minikube]
I0127 12:58:21.274244 750031 provision.go:177] copyRemoteCerts
I0127 12:58:21.274314 750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 12:58:21.274352 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:21.292887 750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
I0127 12:58:21.385450 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 12:58:21.409082 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0127 12:58:21.432720 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 12:58:21.454804 750031 provision.go:87] duration metric: took 406.174669ms to configureAuth
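
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the container's static IP and the machine names, then copies it to /etc/docker on the node. The sketch below produces a certificate with the same SAN list using only the standard library; it is self-signed for simplicity, whereas minikube signs with its own CA, and the names, IP and validity period are taken from the provision lines and cluster config above.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.false-244099"}},
        NotBefore:    time.Now(),
        // 26280h matches the CertExpiration value in the cluster config above.
        NotAfter:    time.Now().Add(26280 * time.Hour),
        KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // SAN list from the "generating server cert" line above.
        DNSNames:    []string{"false-244099", "localhost", "minikube"},
        IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    }
    // Self-signed here (template is its own parent); minikube signs with ca-key.pem.
    der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
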
I0127 12:58:21.454835 750031 ubuntu.go:193] setting minikube options for container-runtime
I0127 12:58:21.455033 750031 config.go:182] Loaded profile config "false-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:58:21.455095 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:21.473360 750031 main.go:141] libmachine: Using SSH client type: native
I0127 12:58:21.473634 750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 33129 <nil> <nil>}
I0127 12:58:21.473653 750031 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0127 12:58:21.604777 750031 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0127 12:58:21.604813 750031 ubuntu.go:71] root file system type: overlay
I0127 12:58:21.604956 750031 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0127 12:58:21.605028 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:21.623177 750031 main.go:141] libmachine: Using SSH client type: native
I0127 12:58:21.623384 750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 33129 <nil> <nil>}
I0127 12:58:21.623468 750031 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0127 12:58:21.763840 750031 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0127 12:58:21.763920 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:21.782240 750031 main.go:141] libmachine: Using SSH client type: native
I0127 12:58:21.782464 750031 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 127.0.0.1 33129 <nil> <nil>}
I0127 12:58:21.782494 750031 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
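
The SSH command above is an idempotent update: the rendered unit is written to docker.service.new, and only if it differs from the installed unit is it moved into place, followed by daemon-reload, enable and restart. A hedged local sketch of the same compare-and-swap step driven from Go; the shell fragment is copied from the log, while the real code runs it over the SSH session established earlier.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Same compare-and-swap fragment as in the log: only restart docker if the
    // freshly rendered unit actually differs from the installed one.
    script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
        `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
        `sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
    out, err := exec.Command("bash", "-c", script).CombinedOutput()
    fmt.Printf("%s", out)
    if err != nil {
        fmt.Println("unit update failed:", err)
    }
}
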
I0127 12:58:20.487145 740231 pod_ready.go:93] pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:20.487168 740231 pod_ready.go:82] duration metric: took 16.0070903s for pod "coredns-668d6bf9bc-dcvd8" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.487178 740231 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-zwmtv" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.492331 740231 pod_ready.go:93] pod "coredns-668d6bf9bc-zwmtv" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:20.492354 740231 pod_ready.go:82] duration metric: took 5.169004ms for pod "coredns-668d6bf9bc-zwmtv" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.492365 740231 pod_ready.go:79] waiting up to 15m0s for pod "etcd-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.497019 740231 pod_ready.go:93] pod "etcd-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:20.497051 740231 pod_ready.go:82] duration metric: took 4.679605ms for pod "etcd-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.497062 740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.504482 740231 pod_ready.go:93] pod "kube-apiserver-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:20.504507 740231 pod_ready.go:82] duration metric: took 7.43749ms for pod "kube-apiserver-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.504517 740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.509056 740231 pod_ready.go:93] pod "kube-controller-manager-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:20.509079 740231 pod_ready.go:82] duration metric: took 4.554501ms for pod "kube-controller-manager-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.509092 740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-h9g74" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.884757 740231 pod_ready.go:93] pod "kube-proxy-h9g74" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:20.884783 740231 pod_ready.go:82] duration metric: took 375.682953ms for pod "kube-proxy-h9g74" in "kube-system" namespace to be "Ready" ...
I0127 12:58:20.884793 740231 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:21.284670 740231 pod_ready.go:93] pod "kube-scheduler-custom-flannel-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:21.284702 740231 pod_ready.go:82] duration metric: took 399.899646ms for pod "kube-scheduler-custom-flannel-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:21.284719 740231 pod_ready.go:39] duration metric: took 16.816187396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
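Editor's note: the pod_ready lines above poll each system-critical pod until its Ready condition is True. A minimal client-go sketch of the same wait (not minikube's pod_ready implementation; the kubeconfig location and pod name are taken as assumptions for illustration):

// Hedged sketch: poll a pod until its PodReady condition is True.
package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, nil // keep polling through transient errors
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    })
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-668d6bf9bc-dcvd8", 15*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("pod is Ready")
}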
I0127 12:58:21.284745 740231 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:58:21.284800 740231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:58:21.297756 740231 api_server.go:72] duration metric: took 18.507036106s to wait for apiserver process to appear ...
I0127 12:58:21.297782 740231 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:58:21.297803 740231 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
I0127 12:58:21.302222 740231 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
ok
I0127 12:58:21.303269 740231 api_server.go:141] control plane version: v1.32.1
I0127 12:58:21.303298 740231 api_server.go:131] duration metric: took 5.507067ms to wait for apiserver health ...
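Editor's note: the healthz probe above is a plain HTTPS GET against the apiserver; /healthz is reachable anonymously on a default RBAC setup. A small Go sketch of the same probe; skipping TLS verification here is an assumption for brevity, the real check would trust the cluster CA:

// Hedged sketch of GET https://<apiserver>:8443/healthz, expecting "200 ok".
package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
    }
    resp, err := client.Get("https://192.168.103.2:8443/healthz")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.StatusCode, string(body))
}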
I0127 12:58:21.303309 740231 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:58:21.488053 740231 system_pods.go:59] 8 kube-system pods found
I0127 12:58:21.488089 740231 system_pods.go:61] "coredns-668d6bf9bc-dcvd8" [0ecb09bd-1300-40f5-a0f2-fc8ce9b0d72d] Running
I0127 12:58:21.488097 740231 system_pods.go:61] "coredns-668d6bf9bc-zwmtv" [debe5abf-11de-47ae-b7c8-4ef1e4c466c8] Running
I0127 12:58:21.488102 740231 system_pods.go:61] "etcd-custom-flannel-244099" [96e5ba57-556b-487d-9556-7f3bcf498077] Running
I0127 12:58:21.488108 740231 system_pods.go:61] "kube-apiserver-custom-flannel-244099" [1d76f029-f119-4d24-8bcf-289895a4190f] Running
I0127 12:58:21.488113 740231 system_pods.go:61] "kube-controller-manager-custom-flannel-244099" [dad5aee1-228e-413b-bddb-8c23faaa5b93] Running
I0127 12:58:21.488118 740231 system_pods.go:61] "kube-proxy-h9g74" [a304f669-d44f-4951-9683-841515701254] Running
I0127 12:58:21.488123 740231 system_pods.go:61] "kube-scheduler-custom-flannel-244099" [ab6ccb2f-51bd-4e26-9797-05fc429b8cfb] Running
I0127 12:58:21.488132 740231 system_pods.go:61] "storage-provisioner" [8a9c05d3-3553-4b7d-99cd-d2b26c5f479b] Running
I0127 12:58:21.488139 740231 system_pods.go:74] duration metric: took 184.823334ms to wait for pod list to return data ...
I0127 12:58:21.488151 740231 default_sa.go:34] waiting for default service account to be created ...
I0127 12:58:21.684822 740231 default_sa.go:45] found service account: "default"
I0127 12:58:21.684859 740231 default_sa.go:55] duration metric: took 196.697507ms for default service account to be created ...
I0127 12:58:21.684870 740231 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 12:58:21.886093 740231 system_pods.go:87] 8 kube-system pods found
I0127 12:58:22.498456 750031 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-12-17 15:44:19.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-01-27 12:58:21.759601475 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this option.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
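Editor's note: the applied diff switches dockerd to listen on tcp://0.0.0.0:2376 with mandatory TLS in addition to the local socket. One way to confirm the endpoint is the standard docker CLI environment variables; a hedged Go sketch follows, in which the machine IP and the certificate directory are assumptions made for illustration:

// Hedged sketch: query the remote dockerd over TLS using DOCKER_* env vars.
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    cmd := exec.Command("docker", "version", "--format", "{{.Server.Version}}")
    cmd.Env = append(os.Environ(),
        "DOCKER_HOST=tcp://192.168.85.2:2376",
        "DOCKER_TLS_VERIFY=1",
        // directory assumed to hold ca.pem, cert.pem and key.pem for this machine
        "DOCKER_CERT_PATH=/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099",
    )
    out, err := cmd.CombinedOutput()
    if err != nil {
        panic(string(out))
    }
    fmt.Print(string(out)) // e.g. 27.4.1
}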
I0127 12:58:22.498494 750031 machine.go:96] duration metric: took 1.912508753s to provisionDockerMachine
I0127 12:58:22.498510 750031 client.go:171] duration metric: took 9.908736816s to LocalClient.Create
I0127 12:58:22.498533 750031 start.go:167] duration metric: took 9.908823652s to libmachine.API.Create "false-244099"
I0127 12:58:22.498556 750031 start.go:293] postStartSetup for "false-244099" (driver="docker")
I0127 12:58:22.498572 750031 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 12:58:22.498638 750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 12:58:22.498681 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:22.515870 750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
I0127 12:58:22.609480 750031 ssh_runner.go:195] Run: cat /etc/os-release
I0127 12:58:22.612645 750031 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0127 12:58:22.612679 750031 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0127 12:58:22.612690 750031 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0127 12:58:22.612697 750031 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0127 12:58:22.612707 750031 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/addons for local assets ...
I0127 12:58:22.612753 750031 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-304536/.minikube/files for local assets ...
I0127 12:58:22.612841 750031 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem -> 3113072.pem in /etc/ssl/certs
I0127 12:58:22.612942 750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 12:58:22.621039 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /etc/ssl/certs/3113072.pem (1708 bytes)
I0127 12:58:22.643937 750031 start.go:296] duration metric: took 145.364237ms for postStartSetup
I0127 12:58:22.644297 750031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-244099
I0127 12:58:22.662219 750031 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/config.json ...
I0127 12:58:22.662583 750031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0127 12:58:22.662644 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:22.680108 750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
I0127 12:58:22.773413 750031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0127 12:58:22.778139 750031 start.go:128] duration metric: took 10.191645506s to createHost
I0127 12:58:22.778170 750031 start.go:83] releasing machines lock for "false-244099", held for 10.191826053s
I0127 12:58:22.778240 750031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-244099
I0127 12:58:22.795411 750031 ssh_runner.go:195] Run: cat /version.json
I0127 12:58:22.795481 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:22.795496 750031 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 12:58:22.795576 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:22.816131 750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
I0127 12:58:22.816965 750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
I0127 12:58:22.982023 750031 ssh_runner.go:195] Run: systemctl --version
I0127 12:58:22.987029 750031 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0127 12:58:22.991988 750031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0127 12:58:23.015716 750031 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0127 12:58:23.015787 750031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0127 12:58:23.032993 750031 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0127 12:58:23.048931 750031 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
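Editor's note: the find/sed pipeline above patches the loopback CNI config so it carries a "name" field and a supported "cniVersion", and rewrites bridge/podman subnets to the pod CIDR. A hedged Go equivalent of just the loopback patch; the concrete file name is an assumption, the log only shows a glob:

// Hedged sketch: ensure the loopback CNI config has "name" and cniVersion 1.0.0.
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

func patchLoopback(path string) error {
    raw, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var conf map[string]any
    if err := json.Unmarshal(raw, &conf); err != nil {
        return err
    }
    if _, ok := conf["name"]; !ok {
        conf["name"] = "loopback"
    }
    conf["cniVersion"] = "1.0.0"
    out, err := json.MarshalIndent(conf, "", "  ")
    if err != nil {
        return err
    }
    return os.WriteFile(path, out, 0o644)
}

func main() {
    if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil { // path assumed
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}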
I0127 12:58:23.048965 750031 start.go:495] detecting cgroup driver to use...
I0127 12:58:23.049003 750031 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0127 12:58:23.049132 750031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 12:58:23.064383 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 12:58:23.073883 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 12:58:23.083270 750031 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 12:58:23.083332 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 12:58:23.092463 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:58:23.101771 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 12:58:23.110798 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 12:58:23.119422 750031 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 12:58:23.128006 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 12:58:23.136736 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 12:58:23.145774 750031 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0127 12:58:23.155256 750031 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 12:58:23.164790 750031 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 12:58:23.174073 750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:58:23.261825 750031 ssh_runner.go:195] Run: sudo systemctl restart containerd
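Editor's note: among the sed edits above, the SystemdCgroup flip is the one that aligns containerd's runc shim with the cgroupfs driver detected on the host. A hedged Go equivalent of that single edit:

// Hedged sketch: force SystemdCgroup = false in containerd's config.toml.
package main

import (
    "os"
    "regexp"
)

func main() {
    const path = "/etc/containerd/config.toml"
    raw, err := os.ReadFile(path)
    if err != nil {
        panic(err)
    }
    re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    patched := re.ReplaceAll(raw, []byte("${1}SystemdCgroup = false"))
    if err := os.WriteFile(path, patched, 0o644); err != nil {
        panic(err)
    }
}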
I0127 12:58:23.355966 750031 start.go:495] detecting cgroup driver to use...
I0127 12:58:23.356032 750031 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0127 12:58:23.356086 750031 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0127 12:58:23.367622 750031 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0127 12:58:23.367696 750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 12:58:23.380037 750031 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 12:58:23.397443 750031 ssh_runner.go:195] Run: which cri-dockerd
I0127 12:58:23.401445 750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0127 12:58:23.410898 750031 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0127 12:58:23.429094 750031 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0127 12:58:23.516773 750031 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0127 12:58:23.613156 750031 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0127 12:58:23.613307 750031 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
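Editor's note: the 130-byte daemon.json pushed above is what pins dockerd to the same cgroupfs driver. Its exact contents are not printed in the log, so the sketch below only illustrates the likely shape; every field in it is an assumption:

// Hedged sketch: render a minimal daemon.json that selects the cgroupfs driver.
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    cfg := map[string]any{
        "exec-opts":  []string{"native.cgroupdriver=cgroupfs"}, // the part that matters here
        "log-driver": "json-file",                              // assumed extra field
    }
    out, _ := json.MarshalIndent(cfg, "", "  ")
    fmt.Println(string(out))
}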
I0127 12:58:23.632098 750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:58:23.732361 750031 ssh_runner.go:195] Run: sudo systemctl restart docker
I0127 12:58:24.011398 750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0127 12:58:24.022879 750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0127 12:58:24.034582 750031 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0127 12:58:24.117884 750031 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0127 12:58:24.204322 750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:58:24.279716 750031 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0127 12:58:24.293486 750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0127 12:58:24.303933 750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:58:24.381228 750031 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0127 12:58:24.441910 750031 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0127 12:58:24.441985 750031 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0127 12:58:24.445897 750031 start.go:563] Will wait 60s for crictl version
I0127 12:58:24.445955 750031 ssh_runner.go:195] Run: which crictl
I0127 12:58:24.449250 750031 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 12:58:24.485587 750031 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.4.1
RuntimeApiVersion: v1
I0127 12:58:24.485659 750031 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 12:58:24.510886 750031 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 12:58:22.186285 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:24.685208 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:26.084734 740231 system_pods.go:105] "coredns-668d6bf9bc-dcvd8" [0ecb09bd-1300-40f5-a0f2-fc8ce9b0d72d] Running
I0127 12:58:26.084758 740231 system_pods.go:105] "coredns-668d6bf9bc-zwmtv" [debe5abf-11de-47ae-b7c8-4ef1e4c466c8] Running
I0127 12:58:26.084764 740231 system_pods.go:105] "etcd-custom-flannel-244099" [96e5ba57-556b-487d-9556-7f3bcf498077] Running
I0127 12:58:26.084772 740231 system_pods.go:105] "kube-apiserver-custom-flannel-244099" [1d76f029-f119-4d24-8bcf-289895a4190f] Running
I0127 12:58:26.084777 740231 system_pods.go:105] "kube-controller-manager-custom-flannel-244099" [dad5aee1-228e-413b-bddb-8c23faaa5b93] Running
I0127 12:58:26.084782 740231 system_pods.go:105] "kube-proxy-h9g74" [a304f669-d44f-4951-9683-841515701254] Running
I0127 12:58:26.084786 740231 system_pods.go:105] "kube-scheduler-custom-flannel-244099" [ab6ccb2f-51bd-4e26-9797-05fc429b8cfb] Running
I0127 12:58:26.084795 740231 system_pods.go:105] "storage-provisioner" [8a9c05d3-3553-4b7d-99cd-d2b26c5f479b] Running
I0127 12:58:26.084804 740231 system_pods.go:147] duration metric: took 4.399927637s to wait for k8s-apps to be running ...
I0127 12:58:26.084814 740231 system_svc.go:44] waiting for kubelet service to be running ....
I0127 12:58:26.084869 740231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 12:58:26.096369 740231 system_svc.go:56] duration metric: took 11.541041ms WaitForService to wait for kubelet
I0127 12:58:26.096403 740231 kubeadm.go:582] duration metric: took 23.305687653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 12:58:26.096427 740231 node_conditions.go:102] verifying NodePressure condition ...
I0127 12:58:26.285200 740231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0127 12:58:26.285229 740231 node_conditions.go:123] node cpu capacity is 8
I0127 12:58:26.285242 740231 node_conditions.go:105] duration metric: took 188.809537ms to run NodePressure ...
I0127 12:58:26.285256 740231 start.go:241] waiting for startup goroutines ...
I0127 12:58:26.285262 740231 start.go:246] waiting for cluster config update ...
I0127 12:58:26.285273 740231 start.go:255] writing updated cluster config ...
I0127 12:58:26.285524 740231 ssh_runner.go:195] Run: rm -f paused
I0127 12:58:26.351226 740231 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0127 12:58:26.353171 740231 out.go:177] * Done! kubectl is now configured to use "custom-flannel-244099" cluster and "default" namespace by default
I0127 12:58:24.536949 750031 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.1 ...
I0127 12:58:24.537056 750031 cli_runner.go:164] Run: docker network inspect false-244099 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 12:58:24.554832 750031 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0127 12:58:24.558856 750031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
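Editor's note: the bash one-liner above deduplicates and appends the host.minikube.internal mapping so the guest can always reach the host gateway by name. A hedged Go version of the same edit, with the gateway IP taken from the log:

// Hedged sketch: drop any existing host.minikube.internal line, then append the mapping.
package main

import (
    "os"
    "strings"
)

func main() {
    const entry = "192.168.85.1\thost.minikube.internal"
    raw, err := os.ReadFile("/etc/hosts")
    if err != nil {
        panic(err)
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
        if !strings.HasSuffix(line, "\thost.minikube.internal") {
            kept = append(kept, line)
        }
    }
    kept = append(kept, entry)
    if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
        panic(err)
    }
}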
I0127 12:58:24.570461 750031 kubeadm.go:883] updating cluster {Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 12:58:24.570644 750031 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 12:58:24.570720 750031 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 12:58:24.591944 750031 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0127 12:58:24.591970 750031 docker.go:619] Images already preloaded, skipping extraction
I0127 12:58:24.592040 750031 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 12:58:24.611987 750031 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0127 12:58:24.612013 750031 cache_images.go:84] Images are preloaded, skipping loading
I0127 12:58:24.612025 750031 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.1 docker true true} ...
I0127 12:58:24.612135 750031 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=false-244099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false}
I0127 12:58:24.612241 750031 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0127 12:58:24.659255 750031 cni.go:84] Creating CNI manager for "false"
I0127 12:58:24.659282 750031 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 12:58:24.659309 750031 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-244099 NodeName:false-244099 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 12:58:24.659473 750031 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "false-244099"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0127 12:58:24.659549 750031 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 12:58:24.668405 750031 binaries.go:44] Found k8s binaries, skipping transfer
I0127 12:58:24.668482 750031 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 12:58:24.676948 750031 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
I0127 12:58:24.695057 750031 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 12:58:24.712046 750031 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
I0127 12:58:24.728932 750031 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0127 12:58:24.732265 750031 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 12:58:24.742765 750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:58:24.819875 750031 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:58:24.834922 750031 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099 for IP: 192.168.85.2
I0127 12:58:24.834955 750031 certs.go:194] generating shared ca certs ...
I0127 12:58:24.834976 750031 certs.go:226] acquiring lock for ca certs: {Name:mk1b16f74c226e2be2c446b7baf1d60d1399508e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:24.835154 750031 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key
I0127 12:58:24.835208 750031 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key
I0127 12:58:24.835221 750031 certs.go:256] generating profile certs ...
I0127 12:58:24.835295 750031 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.key
I0127 12:58:24.835309 750031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.crt with IP's: []
I0127 12:58:25.013234 750031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.crt ...
I0127 12:58:25.013266 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.crt: {Name:mk544c6a47de60ea9e6a96fd2e1af83ec1cc26a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:25.013421 750031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.key ...
I0127 12:58:25.013433 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/client.key: {Name:mkc00ce2454a82942bdf7bf29fc5994084688abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:25.013514 750031 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec
I0127 12:58:25.013530 750031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0127 12:58:25.162552 750031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec ...
I0127 12:58:25.162586 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec: {Name:mkf9f58cd13379161838b1820651898ad35d112f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:25.162732 750031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec ...
I0127 12:58:25.162745 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec: {Name:mkf35a641c1ea4b2cc8d3b70daf636c12a652f0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:25.162819 750031 certs.go:381] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt.73220eec -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt
I0127 12:58:25.162890 750031 certs.go:385] copying /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key.73220eec -> /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key
I0127 12:58:25.162942 750031 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key
I0127 12:58:25.162958 750031 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt with IP's: []
I0127 12:58:25.422535 750031 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt ...
I0127 12:58:25.422566 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt: {Name:mk526bb37f61cb3704a3adee539c0168555157d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:25.422764 750031 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key ...
I0127 12:58:25.422779 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key: {Name:mke9aa09b12da50c2769bf84c9672eed2459f066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:25.423003 750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem (1338 bytes)
W0127 12:58:25.423050 750031 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307_empty.pem, impossibly tiny 0 bytes
I0127 12:58:25.423064 750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca-key.pem (1679 bytes)
I0127 12:58:25.423096 750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/ca.pem (1082 bytes)
I0127 12:58:25.423129 750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/cert.pem (1123 bytes)
I0127 12:58:25.423163 750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/certs/key.pem (1679 bytes)
I0127 12:58:25.423216 750031 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem (1708 bytes)
I0127 12:58:25.423868 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 12:58:25.448061 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0127 12:58:25.471765 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 12:58:25.494534 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 12:58:25.516538 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0127 12:58:25.537994 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0127 12:58:25.560635 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 12:58:25.583314 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/profiles/false-244099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 12:58:25.605393 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 12:58:25.627808 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/certs/311307.pem --> /usr/share/ca-certificates/311307.pem (1338 bytes)
I0127 12:58:25.650341 750031 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-304536/.minikube/files/etc/ssl/certs/3113072.pem --> /usr/share/ca-certificates/3113072.pem (1708 bytes)
I0127 12:58:25.673253 750031 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 12:58:25.691006 750031 ssh_runner.go:195] Run: openssl version
I0127 12:58:25.696267 750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/311307.pem && ln -fs /usr/share/ca-certificates/311307.pem /etc/ssl/certs/311307.pem"
I0127 12:58:25.705145 750031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/311307.pem
I0127 12:58:25.708538 750031 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:14 /usr/share/ca-certificates/311307.pem
I0127 12:58:25.708590 750031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/311307.pem
I0127 12:58:25.714871 750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/311307.pem /etc/ssl/certs/51391683.0"
I0127 12:58:25.723498 750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3113072.pem && ln -fs /usr/share/ca-certificates/3113072.pem /etc/ssl/certs/3113072.pem"
I0127 12:58:25.732043 750031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3113072.pem
I0127 12:58:25.735439 750031 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:14 /usr/share/ca-certificates/3113072.pem
I0127 12:58:25.735491 750031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3113072.pem
I0127 12:58:25.742393 750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3113072.pem /etc/ssl/certs/3ec20f2e.0"
I0127 12:58:25.751687 750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 12:58:25.761320 750031 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 12:58:25.764709 750031 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:09 /usr/share/ca-certificates/minikubeCA.pem
I0127 12:58:25.764768 750031 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 12:58:25.771323 750031 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
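Editor's note: the openssl/ln sequence above follows the standard OpenSSL CA-directory layout: the certificate's subject hash names a <hash>.0 symlink under /etc/ssl/certs (b5213941.0 for minikubeCA in this run), which is how OpenSSL-based clients locate the CA. A hedged Go sketch of the same step, illustration only:

// Hedged sketch: compute the subject hash of a CA cert and link it as <hash>.0.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

func main() {
    cert := "/usr/share/ca-certificates/minikubeCA.pem"
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    if err != nil {
        panic(err)
    }
    hash := strings.TrimSpace(string(out)) // e.g. b5213941
    link := filepath.Join("/etc/ssl/certs", hash+".0")
    if _, err := os.Lstat(link); os.IsNotExist(err) {
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }
    fmt.Println("linked", link)
}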
I0127 12:58:25.780490 750031 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 12:58:25.783587 750031 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0127 12:58:25.783645 750031 kubeadm.go:392] StartCluster: {Name:false-244099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:false-244099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 12:58:25.783754 750031 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0127 12:58:25.802365 750031 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 12:58:25.810991 750031 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 12:58:25.819331 750031 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0127 12:58:25.819397 750031 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 12:58:25.827909 750031 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 12:58:25.827932 750031 kubeadm.go:157] found existing configuration files:
I0127 12:58:25.827986 750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 12:58:25.836213 750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 12:58:25.836278 750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 12:58:25.844083 750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 12:58:25.852072 750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 12:58:25.852138 750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 12:58:25.859840 750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 12:58:25.867738 750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 12:58:25.867808 750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 12:58:25.877362 750031 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 12:58:25.885877 750031 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 12:58:25.885947 750031 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 12:58:25.893835 750031 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0127 12:58:25.954066 750031 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0127 12:58:25.954338 750031 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1074-gcp\n", err: exit status 1
I0127 12:58:26.011081 750031 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
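Editor's note: the first preflight warning above flags that the host is still on cgroups v1. A hedged Go sketch of one way to check which hierarchy a host mounts, using the filesystem magic of /sys/fs/cgroup (the same signal systemd and the kubelet look at); this is an illustration, not part of kubeadm's verification:

// Hedged sketch: detect cgroups v2 (unified hierarchy) vs v1 (legacy/hybrid).
package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func main() {
    var fs unix.Statfs_t
    if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
        panic(err)
    }
    if fs.Type == unix.CGROUP2_SUPER_MAGIC {
        fmt.Println("cgroups v2 (unified hierarchy)")
    } else {
        fmt.Println("cgroups v1 (legacy or hybrid hierarchy)")
    }
}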
I0127 12:58:26.686195 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:28.686293 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:31.185152 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:33.686081 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:35.591620 750031 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 12:58:35.591675 750031 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 12:58:35.591751 750031 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0127 12:58:35.591799 750031 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1074-gcp
I0127 12:58:35.591834 750031 kubeadm.go:310] OS: Linux
I0127 12:58:35.591875 750031 kubeadm.go:310] CGROUPS_CPU: enabled
I0127 12:58:35.591967 750031 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0127 12:58:35.592061 750031 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0127 12:58:35.592147 750031 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0127 12:58:35.592261 750031 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0127 12:58:35.592345 750031 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0127 12:58:35.592417 750031 kubeadm.go:310] CGROUPS_PIDS: enabled
I0127 12:58:35.592482 750031 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0127 12:58:35.592549 750031 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0127 12:58:35.592644 750031 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 12:58:35.592810 750031 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 12:58:35.592964 750031 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 12:58:35.593040 750031 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 12:58:35.594677 750031 out.go:235] - Generating certificates and keys ...
I0127 12:58:35.594758 750031 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 12:58:35.594820 750031 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 12:58:35.594904 750031 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0127 12:58:35.594968 750031 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0127 12:58:35.595026 750031 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0127 12:58:35.595077 750031 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0127 12:58:35.595123 750031 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0127 12:58:35.595218 750031 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [false-244099 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0127 12:58:35.595281 750031 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0127 12:58:35.595393 750031 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [false-244099 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0127 12:58:35.595453 750031 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0127 12:58:35.595508 750031 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0127 12:58:35.595546 750031 kubeadm.go:310] [certs] Generating "sa" key and public key
I0127 12:58:35.595599 750031 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 12:58:35.595657 750031 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 12:58:35.595730 750031 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 12:58:35.595800 750031 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 12:58:35.595854 750031 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 12:58:35.595903 750031 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 12:58:35.595976 750031 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 12:58:35.596056 750031 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 12:58:35.597357 750031 out.go:235] - Booting up control plane ...
I0127 12:58:35.597454 750031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 12:58:35.597524 750031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 12:58:35.597586 750031 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 12:58:35.597726 750031 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 12:58:35.597835 750031 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 12:58:35.597881 750031 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 12:58:35.597991 750031 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 12:58:35.598092 750031 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 12:58:35.598145 750031 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001855835s
I0127 12:58:35.598215 750031 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 12:58:35.598266 750031 kubeadm.go:310] [api-check] The API server is healthy after 4.502145282s
I0127 12:58:35.598362 750031 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 12:58:35.598465 750031 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 12:58:35.598514 750031 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 12:58:35.598694 750031 kubeadm.go:310] [mark-control-plane] Marking the node false-244099 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 12:58:35.598743 750031 kubeadm.go:310] [bootstrap-token] Using token: 01hxvb.iq5wg8lj60p8tw9k
I0127 12:58:35.600030 750031 out.go:235] - Configuring RBAC rules ...
I0127 12:58:35.600144 750031 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 12:58:35.600273 750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 12:58:35.600440 750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 12:58:35.600635 750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 12:58:35.600736 750031 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 12:58:35.600815 750031 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 12:58:35.600933 750031 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 12:58:35.601000 750031 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 12:58:35.601061 750031 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 12:58:35.601070 750031 kubeadm.go:310]
I0127 12:58:35.601148 750031 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 12:58:35.601162 750031 kubeadm.go:310]
I0127 12:58:35.601281 750031 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 12:58:35.601291 750031 kubeadm.go:310]
I0127 12:58:35.601328 750031 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 12:58:35.601411 750031 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 12:58:35.601473 750031 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 12:58:35.601483 750031 kubeadm.go:310]
I0127 12:58:35.601565 750031 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 12:58:35.601578 750031 kubeadm.go:310]
I0127 12:58:35.601649 750031 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 12:58:35.601659 750031 kubeadm.go:310]
I0127 12:58:35.601740 750031 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 12:58:35.601867 750031 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 12:58:35.601949 750031 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 12:58:35.601957 750031 kubeadm.go:310]
I0127 12:58:35.602029 750031 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 12:58:35.602111 750031 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 12:58:35.602120 750031 kubeadm.go:310]
I0127 12:58:35.602195 750031 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 01hxvb.iq5wg8lj60p8tw9k \
I0127 12:58:35.602287 750031 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a \
I0127 12:58:35.602311 750031 kubeadm.go:310] --control-plane
I0127 12:58:35.602318 750031 kubeadm.go:310]
I0127 12:58:35.602400 750031 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 12:58:35.602410 750031 kubeadm.go:310]
I0127 12:58:35.602492 750031 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 01hxvb.iq5wg8lj60p8tw9k \
I0127 12:58:35.602635 750031 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0317d02b8a760fcff4e86e4d275bff52eb4bb604f5db424953dcbe540e77a46a
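For reference, the sha256 value passed to --discovery-token-ca-cert-hash above is derived from the cluster CA public key; it can be recomputed on the control-plane node with the standard kubeadm recipe (illustrative only, not part of this test's output):

  # recompute the discovery-token CA cert hash from the cluster CA certificate
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'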
I0127 12:58:35.602650 750031 cni.go:84] Creating CNI manager for "false"
I0127 12:58:35.602690 750031 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 12:58:35.602733 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:35.602808 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes false-244099 minikube.k8s.io/updated_at=2025_01_27T12_58_35_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=false-244099 minikube.k8s.io/primary=true
I0127 12:58:35.695011 750031 ops.go:34] apiserver oom_adj: -16
I0127 12:58:35.695137 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:36.196124 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:36.695547 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:37.196313 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:37.696289 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:38.195609 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:38.695313 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:39.195362 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:39.696012 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:40.195224 750031 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 12:58:40.279233 750031 kubeadm.go:1113] duration metric: took 4.676541035s to wait for elevateKubeSystemPrivileges
I0127 12:58:40.279276 750031 kubeadm.go:394] duration metric: took 14.495635113s to StartCluster
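The repeated "kubectl get sa default" calls above poll until kube-controller-manager has created the "default" ServiceAccount in the default namespace, since workloads cannot be admitted before it exists. A rough hand-rolled equivalent, assuming a working kubeconfig (sketch only):

  # poll roughly every 500ms until the default ServiceAccount exists, as the log does above
  until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
    sleep 0.5
  done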
I0127 12:58:40.279301 750031 settings.go:142] acquiring lock: {Name:mk55dbc0704f2f9d31c80856a45552242884623b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:40.279373 750031 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20317-304536/kubeconfig
I0127 12:58:40.280840 750031 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-304536/kubeconfig: {Name:mk59d9102d1fe380f0fe65cd8c2acffe42bba157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 12:58:40.281070 750031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0127 12:58:40.281074 750031 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0127 12:58:40.281151 750031 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 12:58:40.281253 750031 addons.go:69] Setting storage-provisioner=true in profile "false-244099"
I0127 12:58:40.281276 750031 addons.go:238] Setting addon storage-provisioner=true in "false-244099"
I0127 12:58:40.281298 750031 config.go:182] Loaded profile config "false-244099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 12:58:40.281315 750031 host.go:66] Checking if "false-244099" exists ...
I0127 12:58:40.281362 750031 addons.go:69] Setting default-storageclass=true in profile "false-244099"
I0127 12:58:40.281377 750031 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-244099"
I0127 12:58:40.281705 750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
I0127 12:58:40.281900 750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
I0127 12:58:40.283565 750031 out.go:177] * Verifying Kubernetes components...
I0127 12:58:40.284837 750031 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 12:58:40.309240 750031 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 12:58:36.184125 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:38.185048 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:40.185246 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:40.309747 750031 addons.go:238] Setting addon default-storageclass=true in "false-244099"
I0127 12:58:40.309794 750031 host.go:66] Checking if "false-244099" exists ...
I0127 12:58:40.310308 750031 cli_runner.go:164] Run: docker container inspect false-244099 --format={{.State.Status}}
I0127 12:58:40.310704 750031 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:58:40.310723 750031 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 12:58:40.310763 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:40.332342 750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
I0127 12:58:40.336244 750031 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 12:58:40.336270 750031 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 12:58:40.336339 750031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-244099
I0127 12:58:40.355344 750031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/20317-304536/.minikube/machines/false-244099/id_rsa Username:docker}
I0127 12:58:40.493563 750031 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0127 12:58:40.583563 750031 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 12:58:40.595140 750031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 12:58:40.689195 750031 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 12:58:41.264907 750031 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
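The sed pipeline at 12:58:40.493563 inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive in the CoreDNS Corefile, which is what the "host record injected" line above reports. To confirm the result (illustrative check, reconstructed from the sed expression rather than copied from the cluster):

  # the Corefile should now contain a hosts block mapping host.minikube.internal to 192.168.85.1
  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'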
I0127 12:58:41.266822 750031 node_ready.go:35] waiting up to 15m0s for node "false-244099" to be "Ready" ...
I0127 12:58:41.281697 750031 node_ready.go:49] node "false-244099" has status "Ready":"True"
I0127 12:58:41.281795 750031 node_ready.go:38] duration metric: took 14.938882ms for node "false-244099" to be "Ready" ...
I0127 12:58:41.282209 750031 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 12:58:41.294528 750031 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-bpx65" in "kube-system" namespace to be "Ready" ...
I0127 12:58:41.769586 750031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174404511s)
I0127 12:58:41.769687 750031 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.080453167s)
I0127 12:58:41.771514 750031 kapi.go:214] "coredns" deployment in "kube-system" namespace and "false-244099" context rescaled to 1 replicas
I0127 12:58:41.781145 750031 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0127 12:58:41.782295 750031 addons.go:514] duration metric: took 1.501154845s for enable addons: enabled=[storage-provisioner default-storageclass]
I0127 12:58:42.185658 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:44.685940 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:42.801265 750031 pod_ready.go:93] pod "coredns-668d6bf9bc-bpx65" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:42.801300 750031 pod_ready.go:82] duration metric: took 1.506684938s for pod "coredns-668d6bf9bc-bpx65" in "kube-system" namespace to be "Ready" ...
I0127 12:58:42.801322 750031 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-tns2z" in "kube-system" namespace to be "Ready" ...
I0127 12:58:44.307614 750031 pod_ready.go:93] pod "coredns-668d6bf9bc-tns2z" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:44.307641 750031 pod_ready.go:82] duration metric: took 1.506312016s for pod "coredns-668d6bf9bc-tns2z" in "kube-system" namespace to be "Ready" ...
I0127 12:58:44.307664 750031 pod_ready.go:79] waiting up to 15m0s for pod "etcd-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:44.311740 750031 pod_ready.go:93] pod "etcd-false-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:44.311760 750031 pod_ready.go:82] duration metric: took 4.087309ms for pod "etcd-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:44.311768 750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:46.318625 750031 pod_ready.go:93] pod "kube-apiserver-false-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:46.318651 750031 pod_ready.go:82] duration metric: took 2.006874406s for pod "kube-apiserver-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:46.318662 750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:48.325340 750031 pod_ready.go:103] pod "kube-controller-manager-false-244099" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:49.323762 750031 pod_ready.go:93] pod "kube-controller-manager-false-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:49.323791 750031 pod_ready.go:82] duration metric: took 3.005116945s for pod "kube-controller-manager-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:49.323802 750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-95qsw" in "kube-system" namespace to be "Ready" ...
I0127 12:58:49.327986 750031 pod_ready.go:93] pod "kube-proxy-95qsw" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:49.328008 750031 pod_ready.go:82] duration metric: took 4.200296ms for pod "kube-proxy-95qsw" in "kube-system" namespace to be "Ready" ...
I0127 12:58:49.328018 750031 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:49.332000 750031 pod_ready.go:93] pod "kube-scheduler-false-244099" in "kube-system" namespace has status "Ready":"True"
I0127 12:58:49.332023 750031 pod_ready.go:82] duration metric: took 3.99765ms for pod "kube-scheduler-false-244099" in "kube-system" namespace to be "Ready" ...
I0127 12:58:49.332031 750031 pod_ready.go:39] duration metric: took 8.049711508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
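The readiness gates above (node Ready first, then each system-critical pod) can be reproduced by hand with kubectl wait, using the same label and component selectors the test lists (illustrative):

  kubectl wait --for=condition=Ready node/false-244099 --timeout=15m
  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m
  kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=15m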
I0127 12:58:49.332055 750031 api_server.go:52] waiting for apiserver process to appear ...
I0127 12:58:49.332126 750031 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 12:58:49.344458 750031 api_server.go:72] duration metric: took 9.063347304s to wait for apiserver process to appear ...
I0127 12:58:49.344488 750031 api_server.go:88] waiting for apiserver healthz status ...
I0127 12:58:49.344512 750031 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0127 12:58:49.349202 750031 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0127 12:58:49.350191 750031 api_server.go:141] control plane version: v1.32.1
I0127 12:58:49.350215 750031 api_server.go:131] duration metric: took 5.720252ms to wait for apiserver health ...
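The healthz probe above hits the API server endpoint directly; with the generated kubeconfig the equivalent checks are (minimal sketch, assuming the admin kubeconfig is active):

  kubectl get --raw /healthz                    # prints "ok" when the API server is healthy
  curl -k https://192.168.85.2:8443/healthz     # -k skips TLS verification; may return 401 if anonymous auth is disabled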
I0127 12:58:49.350223 750031 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 12:58:49.355004 750031 system_pods.go:59] 7 kube-system pods found
I0127 12:58:49.355036 750031 system_pods.go:61] "coredns-668d6bf9bc-tns2z" [cca8ac17-e37f-4929-ba8c-a864654b2f09] Running
I0127 12:58:49.355044 750031 system_pods.go:61] "etcd-false-244099" [5810d0f0-dda9-4cb1-a159-b7e0838dbd0d] Running
I0127 12:58:49.355048 750031 system_pods.go:61] "kube-apiserver-false-244099" [e1d41a3f-17cf-4428-b4f0-b9da38901a34] Running
I0127 12:58:49.355054 750031 system_pods.go:61] "kube-controller-manager-false-244099" [7eea3b17-ca32-4a4c-91be-58f7b94ed885] Running
I0127 12:58:49.355060 750031 system_pods.go:61] "kube-proxy-95qsw" [f78299cf-1d12-4da6-a21f-e8316e43af1a] Running
I0127 12:58:49.355065 750031 system_pods.go:61] "kube-scheduler-false-244099" [d421150b-4093-41a9-9add-4858bddf30fe] Running
I0127 12:58:49.355072 750031 system_pods.go:61] "storage-provisioner" [9339841d-38e8-4d6b-b60e-a69edec0b104] Running
I0127 12:58:49.355086 750031 system_pods.go:74] duration metric: took 4.856318ms to wait for pod list to return data ...
I0127 12:58:49.355096 750031 default_sa.go:34] waiting for default service account to be created ...
I0127 12:58:49.357905 750031 default_sa.go:45] found service account: "default"
I0127 12:58:49.357928 750031 default_sa.go:55] duration metric: took 2.823978ms for default service account to be created ...
I0127 12:58:49.357938 750031 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 12:58:49.473041 750031 system_pods.go:87] 7 kube-system pods found
I0127 12:58:49.671011 750031 system_pods.go:105] "coredns-668d6bf9bc-tns2z" [cca8ac17-e37f-4929-ba8c-a864654b2f09] Running
I0127 12:58:49.671050 750031 system_pods.go:105] "etcd-false-244099" [5810d0f0-dda9-4cb1-a159-b7e0838dbd0d] Running
I0127 12:58:49.671058 750031 system_pods.go:105] "kube-apiserver-false-244099" [e1d41a3f-17cf-4428-b4f0-b9da38901a34] Running
I0127 12:58:49.671065 750031 system_pods.go:105] "kube-controller-manager-false-244099" [7eea3b17-ca32-4a4c-91be-58f7b94ed885] Running
I0127 12:58:49.671071 750031 system_pods.go:105] "kube-proxy-95qsw" [f78299cf-1d12-4da6-a21f-e8316e43af1a] Running
I0127 12:58:49.671077 750031 system_pods.go:105] "kube-scheduler-false-244099" [d421150b-4093-41a9-9add-4858bddf30fe] Running
I0127 12:58:49.671083 750031 system_pods.go:105] "storage-provisioner" [9339841d-38e8-4d6b-b60e-a69edec0b104] Running
I0127 12:58:49.671093 750031 system_pods.go:147] duration metric: took 313.147642ms to wait for k8s-apps to be running ...
I0127 12:58:49.671107 750031 system_svc.go:44] waiting for kubelet service to be running ....
I0127 12:58:49.671167 750031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 12:58:49.685122 750031 system_svc.go:56] duration metric: took 14.004286ms WaitForService to wait for kubelet
I0127 12:58:49.685155 750031 kubeadm.go:582] duration metric: took 9.404048085s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 12:58:49.685178 750031 node_conditions.go:102] verifying NodePressure condition ...
I0127 12:58:49.870998 750031 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0127 12:58:49.871025 750031 node_conditions.go:123] node cpu capacity is 8
I0127 12:58:49.871038 750031 node_conditions.go:105] duration metric: took 185.85445ms to run NodePressure ...
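The NodePressure check above reads the capacity fields straight off the node object; they can be inspected the same way (hypothetical check, same node name as in the log):

  # shows cpu: 8 and ephemeral-storage: 304681132Ki among the capacity fields
  kubectl get node false-244099 -o jsonpath='{.status.capacity}'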
I0127 12:58:49.871050 750031 start.go:241] waiting for startup goroutines ...
I0127 12:58:49.871058 750031 start.go:246] waiting for cluster config update ...
I0127 12:58:49.871072 750031 start.go:255] writing updated cluster config ...
I0127 12:58:49.871364 750031 ssh_runner.go:195] Run: rm -f paused
I0127 12:58:49.923741 750031 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0127 12:58:49.925599 750031 out.go:177] * Done! kubectl is now configured to use "false-244099" cluster and "default" namespace by default
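At this point the kubeconfig context for the profile is active; an obvious smoke test of the freshly configured context (not part of the test output) would be:

  kubectl config current-context   # minikube names the context after the profile, i.e. false-244099
  kubectl get nodes -o wide
  kubectl -n kube-system get pods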
I0127 12:58:46.686588 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
I0127 12:58:49.185623 714706 pod_ready.go:103] pod "metrics-server-f79f97bbb-5v78h" in "kube-system" namespace has status "Ready":"False"
==> Docker <==
Jan 27 12:44:07 offline-docker-649313 dockerd[1364]: time="2025-01-27T12:44:07.420092961Z" level=info msg="API listen on /var/run/docker.sock"
Jan 27 12:44:07 offline-docker-649313 dockerd[1364]: time="2025-01-27T12:44:07.420272119Z" level=info msg="API listen on [::]:2376"
Jan 27 12:44:07 offline-docker-649313 systemd[1]: Started Docker Application Container Engine.
Jan 27 12:44:07 offline-docker-649313 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Start docker client with request timeout 0s"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Hairpin mode is set to hairpin-veth"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Loaded network plugin cni"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Docker cri networking managed by network plugin cni"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Setting cgroupDriver cgroupfs"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Jan 27 12:44:07 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:07Z" level=info msg="Start cri-dockerd grpc backend"
Jan 27 12:44:07 offline-docker-649313 systemd[1]: Started CRI Interface for Docker Application Container Engine.
Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/274ceb1d778af04def78de0fa10867100c2effad7ac3195db386a35b283abb58/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24edc75f83d95672f93b41e3b800ff4db1ccf9f9e9934545ed3871063f654fcd/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee05ddccec7f3a0010e81a31ead44b5a551efb4dbc61388c82054141c2f0fa5d/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Jan 27 12:44:17 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b3d5f35fa6601d2db883f5118f222ff18f9e251801e97e2fa20b695c4289f942/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6784fe75024c94477de9c9dcddf350673b727ec274233c2977c05c966e42d22b/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d149963c0f5417a4bfd7ff76fba93e74bcbe5c8567fe8c7e92dfc73f237f629/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0e922fc5b50a5e3f2fbbdf479a25e30299c64aab3e00b7640846d5596550e0eb/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
Jan 27 12:44:29 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/27063302b3d83e8d66a8a55a42c513af7780289d2ef43b6a9aa3f55dce157d3f/resolv.conf as [nameserver 192.168.76.1 search us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
Jan 27 12:44:33 offline-docker-649313 cri-dockerd[1636]: time="2025-01-27T12:44:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Jan 27 12:44:59 offline-docker-649313 dockerd[1364]: time="2025-01-27T12:44:59.723655025Z" level=info msg="ignoring event" container=d47febb25c7020fe2e70988c4383aadc026bfb71145087b6df8688601199e639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
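The cri-dockerd resolv.conf rewrites above land in each sandbox's per-container file under /var/lib/docker/containers; one way to see what a sandbox actually resolves with (illustrative, using the kube-controller-manager sandbox ID from the log):

  docker inspect --format '{{.ResolvConfPath}}' 274ceb1d778af
  cat "$(docker inspect --format '{{.ResolvConfPath}}' 274ceb1d778af)"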
==> container status <==
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID          POD
c9b8b4ed10388   6e38f40d628db   13 minutes ago   Running   storage-provisioner       1         27063302b3d83   storage-provisioner
d47febb25c702   6e38f40d628db   14 minutes ago   Exited    storage-provisioner       0         27063302b3d83   storage-provisioner
77059df7fccb1   c69fa2e9cbf5f   14 minutes ago   Running   coredns                   0         0e922fc5b50a5   coredns-668d6bf9bc-6nkx4
ad7d572863d3f   c69fa2e9cbf5f   14 minutes ago   Running   coredns                   0         9d149963c0f54   coredns-668d6bf9bc-7rv77
a971d29cc752a   e29f9c7391fd9   14 minutes ago   Running   kube-proxy                0         6784fe75024c9   kube-proxy-nwtdt
f79d8bf90123d   a9e7e6b294baf   14 minutes ago   Running   etcd                      0         b3d5f35fa6601   etcd-offline-docker-649313
d4f44e36ec71d   95c0bda56fc4d   14 minutes ago   Running   kube-apiserver            0         24edc75f83d95   kube-apiserver-offline-docker-649313
6e1305f891a45   2b0d6572d062c   14 minutes ago   Running   kube-scheduler            0         ee05ddccec7f3   kube-scheduler-offline-docker-649313
1451ed12e7f6e   019ee182b58e2   14 minutes ago   Running   kube-controller-manager   0         274ceb1d778af   kube-controller-manager-offline-docker-649313
==> coredns [77059df7fccb] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] 127.0.0.1:52696 - 18710 "HINFO IN 7084254789387126238.3589103866241268019. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007717295s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[976123763]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.581) (total time: 30000ms):
Trace[976123763]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.582)
Trace[976123763]: [30.000823065s] [30.000823065s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1560918851]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.581) (total time: 30001ms):
Trace[1560918851]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.582)
Trace[1560918851]: [30.001006858s] [30.001006858s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[441518432]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.581) (total time: 30001ms):
Trace[441518432]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.582)
Trace[441518432]: [30.00109415s] [30.00109415s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> coredns [ad7d572863d3] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
[INFO] 127.0.0.1:40550 - 6146 "HINFO IN 1544229250248001749.8501780058627845564. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009925252s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[228167203]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.579) (total time: 30000ms):
Trace[228167203]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.579)
Trace[228167203]: [30.000886848s] [30.000886848s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[486211484]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.579) (total time: 30000ms):
Trace[486211484]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.579)
Trace[486211484]: [30.000853603s] [30.000853603s] END
[INFO] plugin/kubernetes: Trace[1023521096]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 12:44:29.579) (total time: 30001ms):
Trace[1023521096]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (12:44:59.579)
Trace[1023521096]: [30.001046101s] [30.001046101s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
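The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors above mean both CoreDNS pods could not reach the kubernetes Service ClusterIP for roughly the first 30s after start. A quick sanity check from a working kubectl context (illustrative):

  kubectl get svc kubernetes -o wide
  kubectl get endpoints kubernetes   # for this cluster the endpoint should resolve to 192.168.76.2:8443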
==> describe nodes <==
Name: offline-docker-649313
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=offline-docker-649313
kubernetes.io/os=linux
minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
minikube.k8s.io/name=offline-docker-649313
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T12_44_23_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 12:44:20 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: offline-docker-649313
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 12:58:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 27 Jan 2025 12:53:53 +0000 Mon, 27 Jan 2025 12:44:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 27 Jan 2025 12:53:53 +0000 Mon, 27 Jan 2025 12:44:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 27 Jan 2025 12:53:53 +0000 Mon, 27 Jan 2025 12:44:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 27 Jan 2025 12:53:53 +0000 Mon, 27 Jan 2025 12:44:20 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: offline-docker-649313
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859372Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859372Ki
pods: 110
System Info:
Machine ID: f2e68375a36b4ab39e70e74b0bae1ce9
System UUID: fdbaac16-a4b6-4b1a-ad65-83886decab7b
Boot ID: bc9990d9-5982-4f92-9b4e-1af016df98ed
Kernel Version: 5.15.0-1074-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.4.1
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-668d6bf9bc-6nkx4 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 14m
kube-system coredns-668d6bf9bc-7rv77 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 14m
kube-system etcd-offline-docker-649313 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 14m
kube-system kube-apiserver-offline-docker-649313 250m (3%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system kube-controller-manager-offline-docker-649313 200m (2%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system kube-proxy-nwtdt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system kube-scheduler-offline-docker-649313 100m (1%) 0 (0%) 0 (0%) 0 (0%) 14m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 0 (0%)
memory 240Mi (0%) 340Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 14m kube-proxy
Normal Starting 14m kubelet Starting kubelet.
Warning CgroupV1 14m kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeAllocatableEnforced 14m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 14m kubelet Node offline-docker-649313 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 14m kubelet Node offline-docker-649313 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 14m kubelet Node offline-docker-649313 status is now: NodeHasSufficientPID
Normal RegisteredNode 14m node-controller Node offline-docker-649313 event: Registered Node offline-docker-649313 in Controller
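For reference, the totals in the Allocated resources block above are the sums of the per-pod requests listed under Non-terminated Pods: CPU 850m = 100m + 100m (two coredns replicas) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler); memory requests 240Mi = 70Mi + 70Mi + 100Mi, and the 340Mi memory limit is the two 170Mi coredns limits.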
==> dmesg <==
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 5e f7 2b c2 4c 08 06
[Jan27 12:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 4c 3d ea f8 b4 08 06
[ +0.000920] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e 62 2f 71 d4 2d 08 06
[ +23.137502] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f7 41 88 91 9f 08 06
[ +24.627716] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 f8 fc 76 b3 98 08 06
[ +0.000571] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff f6 4c 3d ea f8 b4 08 06
[Jan27 12:58] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev cni0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff de b3 a1 fa dd 13 08 06
[ +0.182785] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff de b3 a1 fa dd 13 08 06
[ +0.020535] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 2e 6d 1f ff 91 08 06
[ +17.269528] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 d8 f3 ff 38 52 08 06
[ +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
[ +0.000004] ll header: 00000000: ff ff ff ff ff ff de b3 a1 fa dd 13 08 06
[ +4.619340] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e 92 11 b9 6f 02 08 06
[ +0.088249] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 03 34 a3 fd 56 08 06
==> etcd [f79d8bf90123] <==
{"level":"info","ts":"2025-01-27T12:44:18.585447Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T12:44:18.585620Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T12:44:18.585693Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T12:44:28.214313Z","caller":"traceutil/trace.go:171","msg":"trace[1878519114] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"116.223905ms","start":"2025-01-27T12:44:28.098062Z","end":"2025-01-27T12:44:28.214286Z","steps":["trace[1878519114] 'process raft request' (duration: 108.816283ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.434245Z","caller":"traceutil/trace.go:171","msg":"trace[50517336] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"131.765057ms","start":"2025-01-27T12:44:28.302462Z","end":"2025-01-27T12:44:28.434227Z","steps":["trace[50517336] 'process raft request' (duration: 131.700567ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.434262Z","caller":"traceutil/trace.go:171","msg":"trace[136072165] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"132.443082ms","start":"2025-01-27T12:44:28.301794Z","end":"2025-01-27T12:44:28.434237Z","steps":["trace[136072165] 'process raft request' (duration: 132.313199ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.434387Z","caller":"traceutil/trace.go:171","msg":"trace[1485636036] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"131.691457ms","start":"2025-01-27T12:44:28.302689Z","end":"2025-01-27T12:44:28.434380Z","steps":["trace[1485636036] 'process raft request' (duration: 131.49926ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.434277Z","caller":"traceutil/trace.go:171","msg":"trace[2136396416] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"153.967432ms","start":"2025-01-27T12:44:28.280294Z","end":"2025-01-27T12:44:28.434262Z","steps":["trace[2136396416] 'process raft request' (duration: 84.276722ms)","trace[2136396416] 'compare' (duration: 69.388116ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T12:44:28.596528Z","caller":"traceutil/trace.go:171","msg":"trace[1600605280] transaction","detail":"{read_only:false; response_revision:326; number_of_response:1; }","duration":"131.531049ms","start":"2025-01-27T12:44:28.464967Z","end":"2025-01-27T12:44:28.596498Z","steps":["trace[1600605280] 'process raft request' (duration: 102.250638ms)","trace[1600605280] 'compare' (duration: 29.032642ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T12:44:28.596575Z","caller":"traceutil/trace.go:171","msg":"trace[1899738883] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"131.164682ms","start":"2025-01-27T12:44:28.465374Z","end":"2025-01-27T12:44:28.596539Z","steps":["trace[1899738883] 'process raft request' (duration: 131.096097ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.596540Z","caller":"traceutil/trace.go:171","msg":"trace[1940056429] linearizableReadLoop","detail":"{readStateIndex:339; appliedIndex:337; }","duration":"131.368951ms","start":"2025-01-27T12:44:28.465147Z","end":"2025-01-27T12:44:28.596516Z","steps":["trace[1940056429] 'read index received' (duration: 46.521085ms)","trace[1940056429] 'applied index is now lower than readState.Index' (duration: 84.846971ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T12:44:28.596622Z","caller":"traceutil/trace.go:171","msg":"trace[460231049] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"131.348244ms","start":"2025-01-27T12:44:28.465257Z","end":"2025-01-27T12:44:28.596605Z","steps":["trace[460231049] 'process raft request' (duration: 131.16131ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.596690Z","caller":"traceutil/trace.go:171","msg":"trace[903453467] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"131.064057ms","start":"2025-01-27T12:44:28.465612Z","end":"2025-01-27T12:44:28.596676Z","steps":["trace[903453467] 'process raft request' (duration: 130.877363ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T12:44:28.596747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.539035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-668d6bf9bc-6nkx4\" limit:1 ","response":"range_response_count:1 size:3579"}
{"level":"info","ts":"2025-01-27T12:44:28.596806Z","caller":"traceutil/trace.go:171","msg":"trace[964555452] range","detail":"{range_begin:/registry/pods/kube-system/coredns-668d6bf9bc-6nkx4; range_end:; response_count:1; response_revision:329; }","duration":"131.672294ms","start":"2025-01-27T12:44:28.465124Z","end":"2025-01-27T12:44:28.596796Z","steps":["trace[964555452] 'agreement among raft nodes before linearized reading' (duration: 131.44818ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T12:44:28.598768Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.433672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:3995"}
{"level":"info","ts":"2025-01-27T12:44:28.598837Z","caller":"traceutil/trace.go:171","msg":"trace[781463642] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:330; }","duration":"119.530249ms","start":"2025-01-27T12:44:28.479290Z","end":"2025-01-27T12:44:28.598821Z","steps":["trace[781463642] 'agreement among raft nodes before linearized reading' (duration: 119.366463ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.719582Z","caller":"traceutil/trace.go:171","msg":"trace[1109424790] transaction","detail":"{read_only:false; number_of_response:1; response_revision:332; }","duration":"113.530217ms","start":"2025-01-27T12:44:28.606024Z","end":"2025-01-27T12:44:28.719554Z","steps":["trace[1109424790] 'process raft request' (duration: 113.433367ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T12:44:28.719661Z","caller":"traceutil/trace.go:171","msg":"trace[774799129] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"113.75536ms","start":"2025-01-27T12:44:28.605889Z","end":"2025-01-27T12:44:28.719645Z","steps":["trace[774799129] 'process raft request' (duration: 95.922896ms)","trace[774799129] 'compare' (duration: 17.552421ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T12:44:28.719606Z","caller":"traceutil/trace.go:171","msg":"trace[175832720] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"109.756561ms","start":"2025-01-27T12:44:28.609829Z","end":"2025-01-27T12:44:28.719586Z","steps":["trace[175832720] 'process raft request' (duration: 109.653237ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-27T12:44:37.258390Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.688655ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638350204469583054 > lease_revoke:<id:590694a7ca911ae7>","response":"size:28"}
{"level":"info","ts":"2025-01-27T12:49:18.447257Z","caller":"traceutil/trace.go:171","msg":"trace[213988150] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"157.866585ms","start":"2025-01-27T12:49:18.289353Z","end":"2025-01-27T12:49:18.447220Z","steps":["trace[213988150] 'process raft request' (duration: 91.011989ms)","trace[213988150] 'compare' (duration: 66.659013ms)"],"step_count":2}
{"level":"info","ts":"2025-01-27T12:54:19.086862Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":618}
{"level":"info","ts":"2025-01-27T12:54:19.091406Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":618,"took":"4.286588ms","hash":686489464,"current-db-size-bytes":1863680,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1863680,"current-db-size-in-use":"1.9 MB"}
{"level":"info","ts":"2025-01-27T12:54:19.091440Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":686489464,"revision":618,"compact-revision":-1}
==> kernel <==
12:58:51 up 8:41, 0 users, load average: 3.87, 3.32, 2.72
Linux offline-docker-649313 5.15.0-1074-gcp #83~20.04.1-Ubuntu SMP Wed Dec 18 20:42:35 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [d4f44e36ec71] <==
I0127 12:44:20.577708 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I0127 12:44:20.577734 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0127 12:44:20.577768 1 aggregator.go:171] initial CRD sync complete...
I0127 12:44:20.577776 1 autoregister_controller.go:144] Starting autoregister controller
I0127 12:44:20.577782 1 apf_controller.go:382] Running API Priority and Fairness config worker
I0127 12:44:20.577796 1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
I0127 12:44:20.577789 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0127 12:44:20.577929 1 cache.go:39] Caches are synced for autoregister controller
I0127 12:44:20.577873 1 cache.go:39] Caches are synced for RemoteAvailability controller
I0127 12:44:20.780352 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0127 12:44:21.440853 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0127 12:44:21.445373 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0127 12:44:21.445395 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0127 12:44:21.888800 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0127 12:44:21.926450 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0127 12:44:21.984789 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0127 12:44:21.990224 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0127 12:44:21.991284 1 controller.go:615] quota admission added evaluator for: endpoints
I0127 12:44:21.996268 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0127 12:44:22.499447 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0127 12:44:22.982869 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0127 12:44:22.991528 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0127 12:44:22.998930 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0127 12:44:27.098779 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0127 12:44:27.749857 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
==> kube-controller-manager [1451ed12e7f6] <==
I0127 12:44:27.047624 1 shared_informer.go:320] Caches are synced for endpoint
I0127 12:44:27.048101 1 shared_informer.go:320] Caches are synced for cronjob
I0127 12:44:27.048192 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0127 12:44:27.050059 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0127 12:44:27.051489 1 shared_informer.go:320] Caches are synced for namespace
I0127 12:44:27.051522 1 shared_informer.go:320] Caches are synced for resource quota
I0127 12:44:27.051625 1 shared_informer.go:320] Caches are synced for job
I0127 12:44:27.059497 1 shared_informer.go:320] Caches are synced for garbage collector
I0127 12:44:27.061702 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0127 12:44:27.069038 1 shared_informer.go:320] Caches are synced for crt configmap
I0127 12:44:27.991165 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
I0127 12:44:28.513386 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.411166073s"
I0127 12:44:28.600348 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.687529ms"
I0127 12:44:28.600469 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="80.541µs"
I0127 12:44:28.720923 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="68.806µs"
I0127 12:44:28.761584 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.546µs"
I0127 12:44:30.328680 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="71.236µs"
I0127 12:44:30.370093 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="80.587µs"
I0127 12:44:33.285369 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
I0127 12:45:02.369366 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.38088ms"
I0127 12:45:02.369508 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="60.998µs"
I0127 12:45:02.391337 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="8.240792ms"
I0127 12:45:02.391443 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.63µs"
I0127 12:48:48.723448 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
I0127 12:53:53.779383 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="offline-docker-649313"
==> kube-proxy [a971d29cc752] <==
I0127 12:44:29.375739 1 server_linux.go:66] "Using iptables proxy"
I0127 12:44:29.596702 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
E0127 12:44:29.596773 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 12:44:29.623689 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0127 12:44:29.623771 1 server_linux.go:170] "Using iptables Proxier"
I0127 12:44:29.626176 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 12:44:29.626677 1 server.go:497] "Version info" version="v1.32.1"
I0127 12:44:29.626704 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 12:44:29.629191 1 config.go:199] "Starting service config controller"
I0127 12:44:29.629222 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 12:44:29.629250 1 config.go:105] "Starting endpoint slice config controller"
I0127 12:44:29.629253 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 12:44:29.629776 1 config.go:329] "Starting node config controller"
I0127 12:44:29.629785 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 12:44:29.729881 1 shared_informer.go:320] Caches are synced for node config
I0127 12:44:29.729899 1 shared_informer.go:320] Caches are synced for service config
I0127 12:44:29.729935 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [6e1305f891a4] <==
W0127 12:44:20.580428 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0127 12:44:20.582824 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:44:20.580506 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0127 12:44:20.582877 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 12:44:20.580575 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0127 12:44:20.582931 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0127 12:44:20.580119 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0127 12:44:20.582952 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:44:20.581876 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 12:44:20.582972 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:44:20.582237 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0127 12:44:20.582990 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:44:21.389607 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0127 12:44:21.389663 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:44:21.439086 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0127 12:44:21.439229 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 12:44:21.473812 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0127 12:44:21.473859 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0127 12:44:21.580813 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0127 12:44:21.580859 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 12:44:21.678060 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0127 12:44:21.678115 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 12:44:21.719523 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 12:44:21.719573 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0127 12:44:22.221886 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: I0127 12:44:27.864388 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2371845-b951-4e52-9c2a-01a394a9b403-xtables-lock\") pod \"kube-proxy-nwtdt\" (UID: \"a2371845-b951-4e52-9c2a-01a394a9b403\") " pod="kube-system/kube-proxy-nwtdt"
Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: I0127 12:44:27.864444 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qlx\" (UniqueName: \"kubernetes.io/projected/a2371845-b951-4e52-9c2a-01a394a9b403-kube-api-access-46qlx\") pod \"kube-proxy-nwtdt\" (UID: \"a2371845-b951-4e52-9c2a-01a394a9b403\") " pod="kube-system/kube-proxy-nwtdt"
Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: I0127 12:44:27.864481 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2371845-b951-4e52-9c2a-01a394a9b403-lib-modules\") pod \"kube-proxy-nwtdt\" (UID: \"a2371845-b951-4e52-9c2a-01a394a9b403\") " pod="kube-system/kube-proxy-nwtdt"
Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: E0127 12:44:27.993172 2511 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: E0127 12:44:27.993273 2511 projected.go:194] Error preparing data for projected volume kube-api-access-46qlx for pod kube-system/kube-proxy-nwtdt: configmap "kube-root-ca.crt" not found
Jan 27 12:44:27 offline-docker-649313 kubelet[2511]: E0127 12:44:27.993397 2511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2371845-b951-4e52-9c2a-01a394a9b403-kube-api-access-46qlx podName:a2371845-b951-4e52-9c2a-01a394a9b403 nodeName:}" failed. No retries permitted until 2025-01-27 12:44:28.49336454 +0000 UTC m=+5.768326501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46qlx" (UniqueName: "kubernetes.io/projected/a2371845-b951-4e52-9c2a-01a394a9b403-kube-api-access-46qlx") pod "kube-proxy-nwtdt" (UID: "a2371845-b951-4e52-9c2a-01a394a9b403") : configmap "kube-root-ca.crt" not found
Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.467623 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzfzj\" (UniqueName: \"kubernetes.io/projected/44bc4f70-dd40-4791-864c-0458af6a5fe8-kube-api-access-dzfzj\") pod \"coredns-668d6bf9bc-6nkx4\" (UID: \"44bc4f70-dd40-4791-864c-0458af6a5fe8\") " pod="kube-system/coredns-668d6bf9bc-6nkx4"
Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.467693 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44bc4f70-dd40-4791-864c-0458af6a5fe8-config-volume\") pod \"coredns-668d6bf9bc-6nkx4\" (UID: \"44bc4f70-dd40-4791-864c-0458af6a5fe8\") " pod="kube-system/coredns-668d6bf9bc-6nkx4"
Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.569030 2511 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.769359 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6-config-volume\") pod \"coredns-668d6bf9bc-7rv77\" (UID: \"d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6\") " pod="kube-system/coredns-668d6bf9bc-7rv77"
Jan 27 12:44:28 offline-docker-649313 kubelet[2511]: I0127 12:44:28.769419 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b65k\" (UniqueName: \"kubernetes.io/projected/d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6-kube-api-access-9b65k\") pod \"coredns-668d6bf9bc-7rv77\" (UID: \"d7e0cbe8-c62a-4dc3-9cec-e8acfea42dd6\") " pod="kube-system/coredns-668d6bf9bc-7rv77"
Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.071836 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/56cf6fce-41be-4b78-9a32-86e8e902d97c-tmp\") pod \"storage-provisioner\" (UID: \"56cf6fce-41be-4b78-9a32-86e8e902d97c\") " pod="kube-system/storage-provisioner"
Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.071915 2511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmp5\" (UniqueName: \"kubernetes.io/projected/56cf6fce-41be-4b78-9a32-86e8e902d97c-kube-api-access-psmp5\") pod \"storage-provisioner\" (UID: \"56cf6fce-41be-4b78-9a32-86e8e902d97c\") " pod="kube-system/storage-provisioner"
Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.171721 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d149963c0f5417a4bfd7ff76fba93e74bcbe5c8567fe8c7e92dfc73f237f629"
Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.177250 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e922fc5b50a5e3f2fbbdf479a25e30299c64aab3e00b7640846d5596550e0eb"
Jan 27 12:44:29 offline-docker-649313 kubelet[2511]: I0127 12:44:29.296675 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6784fe75024c94477de9c9dcddf350673b727ec274233c2977c05c966e42d22b"
Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.344930 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nwtdt" podStartSLOduration=3.344906105 podStartE2EDuration="3.344906105s" podCreationTimestamp="2025-01-27 12:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.344724475 +0000 UTC m=+7.619686438" watchObservedRunningTime="2025-01-27 12:44:30.344906105 +0000 UTC m=+7.619868078"
Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.345051 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6nkx4" podStartSLOduration=2.345042633 podStartE2EDuration="2.345042633s" podCreationTimestamp="2025-01-27 12:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.328447552 +0000 UTC m=+7.603409516" watchObservedRunningTime="2025-01-27 12:44:30.345042633 +0000 UTC m=+7.620004596"
Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.371449 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7rv77" podStartSLOduration=2.371417563 podStartE2EDuration="2.371417563s" podCreationTimestamp="2025-01-27 12:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.369935694 +0000 UTC m=+7.644897657" watchObservedRunningTime="2025-01-27 12:44:30.371417563 +0000 UTC m=+7.646379521"
Jan 27 12:44:30 offline-docker-649313 kubelet[2511]: I0127 12:44:30.371647 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.371633287 podStartE2EDuration="2.371633287s" podCreationTimestamp="2025-01-27 12:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:44:30.36100207 +0000 UTC m=+7.635964033" watchObservedRunningTime="2025-01-27 12:44:30.371633287 +0000 UTC m=+7.646595250"
Jan 27 12:44:31 offline-docker-649313 kubelet[2511]: I0127 12:44:31.340575 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 12:44:31 offline-docker-649313 kubelet[2511]: I0127 12:44:31.340579 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 12:44:33 offline-docker-649313 kubelet[2511]: I0127 12:44:33.270132 2511 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 27 12:44:33 offline-docker-649313 kubelet[2511]: I0127 12:44:33.271156 2511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 27 12:45:00 offline-docker-649313 kubelet[2511]: I0127 12:45:00.524439 2511 scope.go:117] "RemoveContainer" containerID="d47febb25c7020fe2e70988c4383aadc026bfb71145087b6df8688601199e639"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p offline-docker-649313 -n offline-docker-649313
helpers_test.go:261: (dbg) Run: kubectl --context offline-docker-649313 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestOffline FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "offline-docker-649313" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p offline-docker-649313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-649313: (2.198024776s)
--- FAIL: TestOffline (904.14s)