=== RUN TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT TestCertExpiration
cert_options_test.go:123: (dbg) Run: out/minikube-darwin-amd64 start -p cert-expiration-306000 --memory=2048 --cert-expiration=3m --driver=hyperkit
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-306000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 90 (14.92383706s)
-- stdout --
* [cert-expiration-306000] minikube v1.32.0 on Darwin 14.2.1
- MINIKUBE_LOCATION=17866
- KUBECONFIG=/Users/jenkins/minikube-integration/17866-67452/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-67452/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on user configuration
* Starting control plane node cert-expiration-306000 in cluster cert-expiration-306000
* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-- /stdout --
** stderr **
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
sudo journalctl --no-pager -u cri-docker.socket:
-- stdout --
-- Journal begins at Tue 2024-01-09 01:55:56 UTC, ends at Tue 2024-01-09 01:56:02 UTC. --
Jan 09 01:55:57 minikube systemd[1]: Starting CRI Docker Socket for the API.
Jan 09 01:55:57 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Jan 09 01:56:00 cert-expiration-306000 systemd[1]: cri-docker.socket: Succeeded.
Jan 09 01:56:00 cert-expiration-306000 systemd[1]: Closed CRI Docker Socket for the API.
Jan 09 01:56:00 cert-expiration-306000 systemd[1]: Stopping CRI Docker Socket for the API.
Jan 09 01:56:00 cert-expiration-306000 systemd[1]: Starting CRI Docker Socket for the API.
Jan 09 01:56:00 cert-expiration-306000 systemd[1]: Listening on CRI Docker Socket for the API.
Jan 09 01:56:02 cert-expiration-306000 systemd[1]: cri-docker.socket: Succeeded.
Jan 09 01:56:02 cert-expiration-306000 systemd[1]: Closed CRI Docker Socket for the API.
Jan 09 01:56:02 cert-expiration-306000 systemd[1]: Stopping CRI Docker Socket for the API.
Jan 09 01:56:02 cert-expiration-306000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
Jan 09 01:56:02 cert-expiration-306000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
-- /stdout --
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-306000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 90
cert_options_test.go:131: (dbg) Run: out/minikube-darwin-amd64 start -p cert-expiration-306000 --memory=2048 --cert-expiration=8760h --driver=hyperkit
E0108 17:59:05.094348 67896 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/skaffold-457000/client.crt: no such file or directory
E0108 17:59:15.335404 67896 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/skaffold-457000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-306000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (22.331667523s)
cert_options_test.go:136: minikube start output did not warn about expired certs:
-- stdout --
* [cert-expiration-306000] minikube v1.32.0 on Darwin 14.2.1
- MINIKUBE_LOCATION=17866
- KUBECONFIG=/Users/jenkins/minikube-integration/17866-67452/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-67452/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on existing profile
* Starting control plane node cert-expiration-306000 in cluster cert-expiration-306000
* Updating the running hyperkit "cert-expiration-306000" VM ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "cert-expiration-306000" cluster and "default" namespace by default
-- /stdout --
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-01-08 17:59:25.407862 -0800 PST m=+2160.437805670
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-306000 -n cert-expiration-306000
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p cert-expiration-306000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p cert-expiration-306000 logs -n 25: (1.68397304s)
helpers_test.go:252: TestCertExpiration logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-525000 sudo cat | cilium-525000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p cilium-525000 sudo | cilium-525000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | |
| | containerd config dump | | | | | |
| ssh | -p cilium-525000 sudo | cilium-525000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-525000 sudo | cilium-525000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-525000 sudo find | cilium-525000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-525000 sudo crio | cilium-525000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | |
| | config | | | | | |
| delete | -p cilium-525000 | cilium-525000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | 08 Jan 24 17:54 PST |
| start | -p force-systemd-env-255000 | force-systemd-env-255000 | jenkins | v1.32.0 | 08 Jan 24 17:54 PST | 08 Jan 24 17:55 PST |
| | --memory=2048 | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p offline-docker-921000 | offline-docker-921000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| start | -p force-systemd-flag-388000 | force-systemd-flag-388000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | force-systemd-env-255000 | force-systemd-env-255000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-env-255000 | force-systemd-env-255000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| start | -p docker-flags-598000 | docker-flags-598000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| | --cache-images=false | | | | | |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=false | | | | | |
| | --docker-env=FOO=BAR | | | | | |
| | --docker-env=BAZ=BAT | | | | | |
| | --docker-opt=debug | | | | | |
| | --docker-opt=icc=true | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | force-systemd-flag-388000 | force-systemd-flag-388000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-flag-388000 | force-systemd-flag-388000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| start | -p cert-expiration-306000 | cert-expiration-306000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | docker-flags-598000 ssh | docker-flags-598000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-598000 ssh | docker-flags-598000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:55 PST |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-598000 | docker-flags-598000 | jenkins | v1.32.0 | 08 Jan 24 17:55 PST | 08 Jan 24 17:56 PST |
| start | -p cert-options-138000 | cert-options-138000 | jenkins | v1.32.0 | 08 Jan 24 17:56 PST | 08 Jan 24 17:56 PST |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | cert-options-138000 ssh | cert-options-138000 | jenkins | v1.32.0 | 08 Jan 24 17:56 PST | 08 Jan 24 17:56 PST |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-138000 -- sudo | cert-options-138000 | jenkins | v1.32.0 | 08 Jan 24 17:56 PST | 08 Jan 24 17:56 PST |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-138000 | cert-options-138000 | jenkins | v1.32.0 | 08 Jan 24 17:56 PST | 08 Jan 24 17:56 PST |
| start | -p running-upgrade-305000 | running-upgrade-305000 | jenkins | v1.32.0 | 08 Jan 24 17:58 PST | |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p cert-expiration-306000 | cert-expiration-306000 | jenkins | v1.32.0 | 08 Jan 24 17:59 PST | 08 Jan 24 17:59 PST |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=hyperkit | | | | | |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/01/08 17:59:03
Running on machine: MacOS-Agent-2
Binary: Built with gc go1.21.5 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0108 17:59:03.131904 71083 out.go:296] Setting OutFile to fd 1 ...
I0108 17:59:03.132187 71083 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 17:59:03.132191 71083 out.go:309] Setting ErrFile to fd 2...
I0108 17:59:03.132194 71083 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 17:59:03.132391 71083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17866-67452/.minikube/bin
I0108 17:59:03.133840 71083 out.go:303] Setting JSON to false
I0108 17:59:03.156559 71083 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":32315,"bootTime":1704733228,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2.1","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0108 17:59:03.156661 71083 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0108 17:59:03.180199 71083 out.go:177] * [cert-expiration-306000] minikube v1.32.0 on Darwin 14.2.1
I0108 17:59:03.263898 71083 out.go:177] - MINIKUBE_LOCATION=17866
I0108 17:59:03.242217 71083 notify.go:220] Checking for updates...
I0108 17:59:03.305624 71083 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17866-67452/kubeconfig
I0108 17:59:03.326917 71083 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0108 17:59:03.347728 71083 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 17:59:03.368897 71083 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17866-67452/.minikube
I0108 17:59:03.389940 71083 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0108 17:59:03.411601 71083 config.go:182] Loaded profile config "cert-expiration-306000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 17:59:03.412454 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:03.412551 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:03.421690 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57578
I0108 17:59:03.422074 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:03.422502 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:03.422518 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:03.422724 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:03.422875 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:03.423075 71083 driver.go:392] Setting default libvirt URI to qemu:///system
I0108 17:59:03.423322 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:03.423342 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:03.431272 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57580
I0108 17:59:03.431591 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:03.431945 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:03.431961 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:03.432173 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:03.432252 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:03.460681 71083 out.go:177] * Using the hyperkit driver based on existing profile
I0108 17:59:03.502861 71083 start.go:298] selected driver: hyperkit
I0108 17:59:03.502879 71083 start.go:902] validating driver "hyperkit" against &{Name:cert-expiration-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.157 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0108 17:59:03.503059 71083 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 17:59:03.507393 71083 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 17:59:03.507488 71083 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17866-67452/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0108 17:59:03.515649 71083 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
I0108 17:59:03.519685 71083 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:03.519701 71083 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0108 17:59:03.519810 71083 cni.go:84] Creating CNI manager for ""
I0108 17:59:03.519824 71083 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0108 17:59:03.519834 71083 start_flags.go:321] config:
{Name:cert-expiration-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.157 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0108 17:59:03.520005 71083 iso.go:125] acquiring lock: {Name:mkc1c28ece5249fd4dc2850e22c4f0a2e79ae425 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 17:59:03.561821 71083 out.go:177] * Starting control plane node cert-expiration-306000 in cluster cert-expiration-306000
I0108 17:59:03.582855 71083 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0108 17:59:03.582927 71083 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17866-67452/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I0108 17:59:03.582954 71083 cache.go:56] Caching tarball of preloaded images
I0108 17:59:03.583167 71083 preload.go:174] Found /Users/jenkins/minikube-integration/17866-67452/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0108 17:59:03.583180 71083 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0108 17:59:03.583315 71083 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/config.json ...
I0108 17:59:03.584376 71083 start.go:365] acquiring machines lock for cert-expiration-306000: {Name:mk2ee15bf972348ae639dd71c5c6879c5a0da850 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0108 17:59:03.584467 71083 start.go:369] acquired machines lock for "cert-expiration-306000" in 74.142µs
I0108 17:59:03.584495 71083 start.go:96] Skipping create...Using existing machine configuration
I0108 17:59:03.584502 71083 fix.go:54] fixHost starting:
I0108 17:59:03.584912 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:03.584953 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:03.593833 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57582
I0108 17:59:03.594205 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:03.594576 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:03.594588 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:03.594813 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:03.594929 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:03.595022 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetState
I0108 17:59:03.595102 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 17:59:03.595177 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | hyperkit pid from json: 70842
I0108 17:59:03.596284 71083 fix.go:102] recreateIfNeeded on cert-expiration-306000: state=Running err=<nil>
W0108 17:59:03.596297 71083 fix.go:128] unexpected machine state, will restart: <nil>
I0108 17:59:03.617639 71083 out.go:177] * Updating the running hyperkit "cert-expiration-306000" VM ...
I0108 17:59:03.638906 71083 machine.go:88] provisioning docker machine ...
I0108 17:59:03.638927 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:03.639246 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetMachineName
I0108 17:59:03.639482 71083 buildroot.go:166] provisioning hostname "cert-expiration-306000"
I0108 17:59:03.639500 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetMachineName
I0108 17:59:03.639721 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:03.639947 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:03.640165 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:03.640376 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:03.640545 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:03.640798 71083 main.go:141] libmachine: Using SSH client type: native
I0108 17:59:03.641359 71083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 192.169.0.157 22 <nil> <nil>}
I0108 17:59:03.641369 71083 main.go:141] libmachine: About to run SSH command:
sudo hostname cert-expiration-306000 && echo "cert-expiration-306000" | sudo tee /etc/hostname
I0108 17:59:03.707531 71083 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-306000
I0108 17:59:03.707564 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:03.707716 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:03.707854 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:03.707956 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:03.708039 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:03.708164 71083 main.go:141] libmachine: Using SSH client type: native
I0108 17:59:03.708425 71083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 192.169.0.157 22 <nil> <nil>}
I0108 17:59:03.708437 71083 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\scert-expiration-306000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-306000/g' /etc/hosts;
else
echo '127.0.1.1 cert-expiration-306000' | sudo tee -a /etc/hosts;
fi
fi
I0108 17:59:03.764075 71083 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0108 17:59:03.764087 71083 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17866-67452/.minikube CaCertPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17866-67452/.minikube}
I0108 17:59:03.764097 71083 buildroot.go:174] setting up certificates
I0108 17:59:03.764119 71083 provision.go:83] configureAuth start
I0108 17:59:03.764125 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetMachineName
I0108 17:59:03.764264 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetIP
I0108 17:59:03.764345 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:03.764417 71083 provision.go:138] copyHostCerts
I0108 17:59:03.764497 71083 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-67452/.minikube/ca.pem, removing ...
I0108 17:59:03.764503 71083 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-67452/.minikube/ca.pem
I0108 17:59:03.764682 71083 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17866-67452/.minikube/ca.pem (1082 bytes)
I0108 17:59:03.764926 71083 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-67452/.minikube/cert.pem, removing ...
I0108 17:59:03.764929 71083 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-67452/.minikube/cert.pem
I0108 17:59:03.764996 71083 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17866-67452/.minikube/cert.pem (1123 bytes)
I0108 17:59:03.765168 71083 exec_runner.go:144] found /Users/jenkins/minikube-integration/17866-67452/.minikube/key.pem, removing ...
I0108 17:59:03.765171 71083 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17866-67452/.minikube/key.pem
I0108 17:59:03.765232 71083 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17866-67452/.minikube/key.pem (1679 bytes)
I0108 17:59:03.765372 71083 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-306000 san=[192.169.0.157 192.169.0.157 localhost 127.0.0.1 minikube cert-expiration-306000]
I0108 17:59:03.970750 71083 provision.go:172] copyRemoteCerts
I0108 17:59:03.970803 71083 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 17:59:03.970818 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:03.970957 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:03.971039 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:03.971134 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:03.971230 71083 sshutil.go:53] new ssh client: &{IP:192.169.0.157 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/cert-expiration-306000/id_rsa Username:docker}
I0108 17:59:04.005672 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0108 17:59:04.020730 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0108 17:59:04.035731 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0108 17:59:04.050772 71083 provision.go:86] duration metric: configureAuth took 286.648436ms
I0108 17:59:04.050786 71083 buildroot.go:189] setting minikube options for container-runtime
I0108 17:59:04.050919 71083 config.go:182] Loaded profile config "cert-expiration-306000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 17:59:04.050929 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:04.051057 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:04.051146 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:04.051223 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.051299 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.051377 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:04.051482 71083 main.go:141] libmachine: Using SSH client type: native
I0108 17:59:04.051717 71083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 192.169.0.157 22 <nil> <nil>}
I0108 17:59:04.051722 71083 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0108 17:59:04.108757 71083 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0108 17:59:04.108764 71083 buildroot.go:70] root file system type: tmpfs
I0108 17:59:04.108846 71083 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0108 17:59:04.108859 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:04.108977 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:04.109060 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.109128 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.109213 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:04.109336 71083 main.go:141] libmachine: Using SSH client type: native
I0108 17:59:04.109579 71083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 192.169.0.157 22 <nil> <nil>}
I0108 17:59:04.109623 71083 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0108 17:59:04.174786 71083 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0108 17:59:04.174804 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:04.174935 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:04.175025 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.175111 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.175190 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:04.175319 71083 main.go:141] libmachine: Using SSH client type: native
I0108 17:59:04.175564 71083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 192.169.0.157 22 <nil> <nil>}
I0108 17:59:04.175574 71083 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0108 17:59:04.235889 71083 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0108 17:59:04.235899 71083 machine.go:91] provisioned docker machine in 596.987408ms
I0108 17:59:04.235906 71083 start.go:300] post-start starting for "cert-expiration-306000" (driver="hyperkit")
I0108 17:59:04.235912 71083 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 17:59:04.235919 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:04.236089 71083 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 17:59:04.236101 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:04.236184 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:04.236248 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.236326 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:04.236410 71083 sshutil.go:53] new ssh client: &{IP:192.169.0.157 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/cert-expiration-306000/id_rsa Username:docker}
I0108 17:59:04.269908 71083 ssh_runner.go:195] Run: cat /etc/os-release
I0108 17:59:04.272526 71083 info.go:137] Remote host: Buildroot 2021.02.12
I0108 17:59:04.272540 71083 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-67452/.minikube/addons for local assets ...
I0108 17:59:04.272619 71083 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17866-67452/.minikube/files for local assets ...
I0108 17:59:04.272755 71083 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17866-67452/.minikube/files/etc/ssl/certs/678962.pem -> 678962.pem in /etc/ssl/certs
I0108 17:59:04.272906 71083 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 17:59:04.279184 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/files/etc/ssl/certs/678962.pem --> /etc/ssl/certs/678962.pem (1708 bytes)
I0108 17:59:04.294199 71083 start.go:303] post-start completed in 58.287663ms
I0108 17:59:04.294210 71083 fix.go:56] fixHost completed within 709.711969ms
I0108 17:59:04.294222 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:04.294352 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:04.294433 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.294507 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.294588 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:04.294696 71083 main.go:141] libmachine: Using SSH client type: native
I0108 17:59:04.294933 71083 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x14074c0] 0x140a1a0 <nil> [] 0s} 192.169.0.157 22 <nil> <nil>}
I0108 17:59:04.294938 71083 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0108 17:59:04.350368 71083 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704765544.456104084
I0108 17:59:04.350378 71083 fix.go:206] guest clock: 1704765544.456104084
I0108 17:59:04.350382 71083 fix.go:219] Guest: 2024-01-08 17:59:04.456104084 -0800 PST Remote: 2024-01-08 17:59:04.294211 -0800 PST m=+1.208985930 (delta=161.893084ms)
I0108 17:59:04.350398 71083 fix.go:190] guest clock delta is within tolerance: 161.893084ms
I0108 17:59:04.350403 71083 start.go:83] releasing machines lock for "cert-expiration-306000", held for 765.932385ms
I0108 17:59:04.350427 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:04.350544 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetIP
I0108 17:59:04.350627 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:04.350902 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:04.351001 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:04.351073 71083 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0108 17:59:04.351095 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:04.351122 71083 ssh_runner.go:195] Run: cat /version.json
I0108 17:59:04.351130 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:04.351189 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:04.351213 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:04.351289 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.351306 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:04.351383 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:04.351390 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:04.351464 71083 sshutil.go:53] new ssh client: &{IP:192.169.0.157 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/cert-expiration-306000/id_rsa Username:docker}
I0108 17:59:04.351479 71083 sshutil.go:53] new ssh client: &{IP:192.169.0.157 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/cert-expiration-306000/id_rsa Username:docker}
I0108 17:59:04.382750 71083 ssh_runner.go:195] Run: systemctl --version
I0108 17:59:04.386741 71083 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0108 17:59:04.438101 71083 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0108 17:59:04.438190 71083 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0108 17:59:04.444535 71083 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0108 17:59:04.444542 71083 start.go:475] detecting cgroup driver to use...
I0108 17:59:04.444645 71083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 17:59:04.456370 71083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0108 17:59:04.462709 71083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0108 17:59:04.468954 71083 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0108 17:59:04.468989 71083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0108 17:59:04.475286 71083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 17:59:04.481553 71083 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0108 17:59:04.487743 71083 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 17:59:04.494168 71083 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0108 17:59:04.500682 71083 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0108 17:59:04.507079 71083 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0108 17:59:04.512786 71083 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0108 17:59:04.518531 71083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 17:59:04.599045 71083 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0108 17:59:04.611369 71083 start.go:475] detecting cgroup driver to use...
I0108 17:59:04.611443 71083 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0108 17:59:04.620781 71083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0108 17:59:04.633866 71083 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0108 17:59:04.648265 71083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0108 17:59:04.656450 71083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 17:59:04.664606 71083 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 17:59:04.677453 71083 ssh_runner.go:195] Run: which cri-dockerd
I0108 17:59:04.679870 71083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0108 17:59:04.685373 71083 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0108 17:59:04.696426 71083 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0108 17:59:04.778133 71083 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0108 17:59:04.868358 71083 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0108 17:59:04.868423 71083 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0108 17:59:04.879702 71083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 17:59:04.968118 71083 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 17:59:06.227006 71083 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.258878279s)
I0108 17:59:06.227057 71083 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0108 17:59:06.311611 71083 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0108 17:59:06.402440 71083 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0108 17:59:06.498127 71083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 17:59:06.595496 71083 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0108 17:59:06.610489 71083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 17:59:06.701926 71083 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0108 17:59:06.753478 71083 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0108 17:59:06.753553 71083 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0108 17:59:06.757317 71083 start.go:543] Will wait 60s for crictl version
I0108 17:59:06.757360 71083 ssh_runner.go:195] Run: which crictl
I0108 17:59:06.760048 71083 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0108 17:59:06.794511 71083 start.go:559] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.7
RuntimeApiVersion: v1
I0108 17:59:06.794574 71083 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 17:59:06.812047 71083 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 17:59:06.871705 71083 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
I0108 17:59:06.871748 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetIP
I0108 17:59:06.872117 71083 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0108 17:59:06.876258 71083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 17:59:06.884073 71083 localpath.go:92] copying /Users/jenkins/minikube-integration/17866-67452/.minikube/client.crt -> /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/client.crt
I0108 17:59:06.884398 71083 localpath.go:117] copying /Users/jenkins/minikube-integration/17866-67452/.minikube/client.key -> /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/client.key
I0108 17:59:06.884618 71083 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0108 17:59:06.884689 71083 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 17:59:06.896524 71083 docker.go:671] Got preloaded images:
I0108 17:59:06.896534 71083 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
I0108 17:59:06.896575 71083 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0108 17:59:06.903194 71083 ssh_runner.go:195] Run: which lz4
I0108 17:59:06.905680 71083 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0108 17:59:06.908262 71083 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0108 17:59:06.908275 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
I0108 17:59:08.055766 71083 docker.go:635] Took 1.150127 seconds to copy over tarball
I0108 17:59:08.055836 71083 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0108 17:59:10.229542 70990 ssh_runner.go:235] Completed: docker stop e0d35124bb53 cf57257413ef e6c2aed6696a 8dbce97d7c04 520dc40c6e4c 0231581677fc 262c5bae009a 46f3b2816672 4f101e96f111 e5509ffd08f2 5683e0c9df73 a9a6747f7152 fad621c7cfb6 137abc4a14c1 9991d6d03754 493644811ca4 bafa5980e6b2 cf665cf89091 07140a5cd60c ef60fb444d85 3e52602dd87f 09c1443cc61e 005a5e74faef 3108c2a294ee 28df2fdc4704 710f1ccad12d dcfbbd81c7be: (10.235860535s)
I0108 17:59:10.229606 70990 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0108 17:59:10.236594 70990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 17:59:10.240925 70990 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5625 Jan 9 01:57 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jan 9 01:57 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1981 Jan 9 01:58 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5605 Jan 9 01:57 /etc/kubernetes/scheduler.conf
I0108 17:59:10.240983 70990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0108 17:59:10.244386 70990 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 1
stdout:
stderr:
I0108 17:59:10.244439 70990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0108 17:59:10.248449 70990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0108 17:59:10.251799 70990 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
stdout:
stderr:
I0108 17:59:10.251842 70990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0108 17:59:10.255562 70990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0108 17:59:10.258803 70990 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0108 17:59:10.258851 70990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0108 17:59:10.262844 70990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0108 17:59:10.266503 70990 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0108 17:59:10.266559 70990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0108 17:59:10.270353 70990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 17:59:10.274208 70990 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0108 17:59:10.274220 70990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0108 17:59:10.315246 70990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0108 17:59:10.976527 71083 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.920688947s)
I0108 17:59:10.976538 71083 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0108 17:59:11.009050 71083 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0108 17:59:11.017216 71083 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
I0108 17:59:11.030842 71083 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 17:59:11.124458 71083 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 17:59:12.793753 71083 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.669290355s)
I0108 17:59:12.793848 71083 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 17:59:12.807109 71083 docker.go:671] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0108 17:59:12.807125 71083 cache_images.go:84] Images are preloaded, skipping loading
I0108 17:59:12.807197 71083 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0108 17:59:12.825033 71083 cni.go:84] Creating CNI manager for ""
I0108 17:59:12.825049 71083 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0108 17:59:12.825075 71083 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 17:59:12.825094 71083 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.157 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-306000 NodeName:cert-expiration-306000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0108 17:59:12.825196 71083 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.169.0.157
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "cert-expiration-306000"
kubeletExtraArgs:
node-ip: 192.169.0.157
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.169.0.157"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0108 17:59:12.825252 71083 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=cert-expiration-306000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.157
[Install]
config:
{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
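Note: the block above is the kubeadm config and kubelet systemd drop-in that minikube renders before invoking kubeadm init. As a minimal sketch for checking what was actually written inside the VM (assuming the cert-expiration-306000 profile is still running and the paths match those logged above; the exact ssh invocation is illustrative, not taken from this log):
  # Show the rendered kubeadm config and the kubelet drop-in on the node.
  out/minikube-darwin-amd64 -p cert-expiration-306000 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
  out/minikube-darwin-amd64 -p cert-expiration-306000 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf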
I0108 17:59:12.825308 71083 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
I0108 17:59:12.831222 71083 binaries.go:44] Found k8s binaries, skipping transfer
I0108 17:59:12.831264 71083 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 17:59:12.836973 71083 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (384 bytes)
I0108 17:59:12.848049 71083 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0108 17:59:12.859128 71083 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
I0108 17:59:12.870148 71083 ssh_runner.go:195] Run: grep 192.169.0.157 control-plane.minikube.internal$ /etc/hosts
I0108 17:59:12.872741 71083 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.157 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0108 17:59:12.881195 71083 certs.go:56] Setting up /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000 for IP: 192.169.0.157
I0108 17:59:12.881212 71083 certs.go:190] acquiring lock for shared ca certs: {Name:mk65fb56988500bbbc6096ce8691c72181401bb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 17:59:12.881350 71083 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17866-67452/.minikube/ca.key
I0108 17:59:12.881395 71083 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17866-67452/.minikube/proxy-client-ca.key
I0108 17:59:12.881484 71083 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/client.key
I0108 17:59:12.881500 71083 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.key.d4da38e9
I0108 17:59:12.881515 71083 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.crt.d4da38e9 with IP's: [192.169.0.157 10.96.0.1 127.0.0.1 10.0.0.1]
I0108 17:59:12.979238 71083 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.crt.d4da38e9 ...
I0108 17:59:12.979247 71083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.crt.d4da38e9: {Name:mk8bf10158596498ea7cdebb6a7db56747b46938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 17:59:12.979568 71083 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.key.d4da38e9 ...
I0108 17:59:12.979576 71083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.key.d4da38e9: {Name:mk8ece666a7fb46682e11725dc7b75b66432aedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 17:59:12.979770 71083 certs.go:337] copying /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.crt.d4da38e9 -> /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.crt
I0108 17:59:12.980004 71083 certs.go:341] copying /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.key.d4da38e9 -> /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.key
I0108 17:59:12.980186 71083 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.key
I0108 17:59:12.980202 71083 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.crt with IP's: []
I0108 17:59:13.218891 71083 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.crt ...
I0108 17:59:13.218900 71083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.crt: {Name:mkaf2fac46dcafcc4a5333b8a94d36c1dc9ef843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 17:59:13.219226 71083 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.key ...
I0108 17:59:13.219236 71083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.key: {Name:mk27590700d8cc6a4993d814519c525370094e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 17:59:13.219629 71083 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/67896.pem (1338 bytes)
W0108 17:59:13.219672 71083 certs.go:433] ignoring /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/67896_empty.pem, impossibly tiny 0 bytes
I0108 17:59:13.219683 71083 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca-key.pem (1679 bytes)
I0108 17:59:13.219715 71083 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/ca.pem (1082 bytes)
I0108 17:59:13.219746 71083 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/cert.pem (1123 bytes)
I0108 17:59:13.219778 71083 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/Users/jenkins/minikube-integration/17866-67452/.minikube/certs/key.pem (1679 bytes)
I0108 17:59:13.219836 71083 certs.go:437] found cert: /Users/jenkins/minikube-integration/17866-67452/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17866-67452/.minikube/files/etc/ssl/certs/678962.pem (1708 bytes)
I0108 17:59:13.220337 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 17:59:13.237651 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0108 17:59:13.253774 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 17:59:13.269775 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/profiles/cert-expiration-306000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0108 17:59:13.285669 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 17:59:13.301775 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0108 17:59:13.317863 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 17:59:13.333598 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0108 17:59:13.349651 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 17:59:13.365387 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/certs/67896.pem --> /usr/share/ca-certificates/67896.pem (1338 bytes)
I0108 17:59:13.381322 71083 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17866-67452/.minikube/files/etc/ssl/certs/678962.pem --> /usr/share/ca-certificates/678962.pem (1708 bytes)
I0108 17:59:13.397157 71083 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0108 17:59:13.408375 71083 ssh_runner.go:195] Run: openssl version
I0108 17:59:13.412007 71083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 17:59:13.418417 71083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 17:59:13.421502 71083 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 9 01:25 /usr/share/ca-certificates/minikubeCA.pem
I0108 17:59:13.421534 71083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 17:59:13.425197 71083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 17:59:13.431716 71083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67896.pem && ln -fs /usr/share/ca-certificates/67896.pem /etc/ssl/certs/67896.pem"
I0108 17:59:13.438231 71083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67896.pem
I0108 17:59:13.441304 71083 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 9 01:29 /usr/share/ca-certificates/67896.pem
I0108 17:59:13.441342 71083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67896.pem
I0108 17:59:13.444990 71083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/67896.pem /etc/ssl/certs/51391683.0"
I0108 17:59:13.451551 71083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/678962.pem && ln -fs /usr/share/ca-certificates/678962.pem /etc/ssl/certs/678962.pem"
I0108 17:59:13.458133 71083 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/678962.pem
I0108 17:59:13.461223 71083 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 9 01:29 /usr/share/ca-certificates/678962.pem
I0108 17:59:13.461251 71083 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/678962.pem
I0108 17:59:13.464840 71083 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/678962.pem /etc/ssl/certs/3ec20f2e.0"
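Note: the openssl/ln sequence above is how minikube publishes its CA certificates into the guest trust store: each PEM is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs both by name and by its OpenSSL subject hash (<hash>.0). A minimal sketch of the same steps for minikubeCA.pem, assuming a shell inside the VM and the paths shown in the log:
  # Derive the subject-hash link name and create the two symlinks, mirroring the commands logged above.
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0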
I0108 17:59:13.471326 71083 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0108 17:59:13.474090 71083 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0108 17:59:13.474131 71083 kubeadm.go:404] StartCluster: {Name:cert-expiration-306000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.169.0.157 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0108 17:59:13.474216 71083 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0108 17:59:13.486114 71083 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 17:59:13.492215 71083 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 17:59:13.497925 71083 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 17:59:13.503841 71083 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0108 17:59:13.503858 71083 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0108 17:59:13.562047 71083 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
I0108 17:59:13.562147 71083 kubeadm.go:322] [preflight] Running pre-flight checks
I0108 17:59:13.717443 71083 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0108 17:59:13.717527 71083 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0108 17:59:13.717609 71083 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0108 17:59:13.930929 71083 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0108 17:59:11.235158 70990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0108 17:59:11.385193 70990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0108 17:59:11.461281 70990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0108 17:59:11.525086 70990 api_server.go:52] waiting for apiserver process to appear ...
I0108 17:59:11.525153 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:12.025301 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:12.525351 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:13.026184 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:13.525754 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:14.025693 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:14.525510 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:15.025550 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:13.951382 71083 out.go:204] - Generating certificates and keys ...
I0108 17:59:13.951455 71083 kubeadm.go:322] [certs] Using existing ca certificate authority
I0108 17:59:13.951521 71083 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0108 17:59:13.988690 71083 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0108 17:59:14.488626 71083 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0108 17:59:14.749324 71083 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0108 17:59:14.933866 71083 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0108 17:59:15.238324 71083 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0108 17:59:15.238485 71083 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-306000 localhost] and IPs [192.169.0.157 127.0.0.1 ::1]
I0108 17:59:15.373597 71083 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0108 17:59:15.373719 71083 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-306000 localhost] and IPs [192.169.0.157 127.0.0.1 ::1]
I0108 17:59:15.683676 71083 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0108 17:59:15.806085 71083 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0108 17:59:16.008284 71083 kubeadm.go:322] [certs] Generating "sa" key and public key
I0108 17:59:16.008335 71083 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0108 17:59:16.092889 71083 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0108 17:59:16.224282 71083 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0108 17:59:16.654764 71083 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0108 17:59:16.820473 71083 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0108 17:59:16.820913 71083 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0108 17:59:16.822778 71083 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0108 17:59:16.844987 71083 out.go:204] - Booting up control plane ...
I0108 17:59:16.845057 71083 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0108 17:59:16.845114 71083 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0108 17:59:16.845160 71083 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0108 17:59:16.845233 71083 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0108 17:59:16.845317 71083 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0108 17:59:16.845354 71083 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0108 17:59:16.928266 71083 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0108 17:59:15.525508 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:16.025281 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:16.526003 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:17.025889 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:17.527351 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:18.025329 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:18.525521 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:19.026104 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:19.526613 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:20.025218 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:22.428289 71083 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502762 seconds
I0108 17:59:22.428382 71083 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0108 17:59:22.436204 71083 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0108 17:59:22.952615 71083 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0108 17:59:22.952757 71083 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-306000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0108 17:59:23.460068 71083 kubeadm.go:322] [bootstrap-token] Using token: o4j706.2ke80nfgf3lgglak
I0108 17:59:23.497537 71083 out.go:204] - Configuring RBAC rules ...
I0108 17:59:23.497674 71083 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0108 17:59:23.501135 71083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0108 17:59:23.541939 71083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0108 17:59:23.544221 71083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0108 17:59:23.547241 71083 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0108 17:59:23.551388 71083 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0108 17:59:23.559386 71083 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0108 17:59:23.725397 71083 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0108 17:59:23.905249 71083 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0108 17:59:23.905864 71083 kubeadm.go:322]
I0108 17:59:23.905911 71083 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0108 17:59:23.905914 71083 kubeadm.go:322]
I0108 17:59:23.905969 71083 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0108 17:59:23.905972 71083 kubeadm.go:322]
I0108 17:59:23.905996 71083 kubeadm.go:322] mkdir -p $HOME/.kube
I0108 17:59:23.906052 71083 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0108 17:59:23.906110 71083 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0108 17:59:23.906113 71083 kubeadm.go:322]
I0108 17:59:23.906152 71083 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0108 17:59:23.906154 71083 kubeadm.go:322]
I0108 17:59:23.906209 71083 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0108 17:59:23.906212 71083 kubeadm.go:322]
I0108 17:59:23.906258 71083 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0108 17:59:23.906313 71083 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0108 17:59:23.906365 71083 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0108 17:59:23.906372 71083 kubeadm.go:322]
I0108 17:59:23.906438 71083 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0108 17:59:23.906509 71083 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0108 17:59:23.906514 71083 kubeadm.go:322]
I0108 17:59:23.906573 71083 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o4j706.2ke80nfgf3lgglak \
I0108 17:59:23.906653 71083 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:cdae8860e88f68c1c48f6b03093d4cc8011d8454d3b1f746a55c569eef25216b \
I0108 17:59:23.906675 71083 kubeadm.go:322] --control-plane
I0108 17:59:23.906681 71083 kubeadm.go:322]
I0108 17:59:23.906751 71083 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0108 17:59:23.906754 71083 kubeadm.go:322]
I0108 17:59:23.906812 71083 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o4j706.2ke80nfgf3lgglak \
I0108 17:59:23.906898 71083 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:cdae8860e88f68c1c48f6b03093d4cc8011d8454d3b1f746a55c569eef25216b
I0108 17:59:23.907693 71083 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0108 17:59:23.907750 71083 cni.go:84] Creating CNI manager for ""
I0108 17:59:23.907763 71083 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0108 17:59:23.930794 71083 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0108 17:59:23.967715 71083 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0108 17:59:23.985473 71083 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0108 17:59:24.009439 71083 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0108 17:59:24.009502 71083 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0108 17:59:24.009503 71083 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c4ef52eca86898c65de92fcd28450f715088c13b minikube.k8s.io/name=cert-expiration-306000 minikube.k8s.io/updated_at=2024_01_08T17_59_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0108 17:59:24.163326 71083 kubeadm.go:1088] duration metric: took 153.878964ms to wait for elevateKubeSystemPrivileges.
I0108 17:59:24.163358 71083 ops.go:34] apiserver oom_adj: -16
I0108 17:59:24.163370 71083 kubeadm.go:406] StartCluster complete in 10.689296422s
I0108 17:59:24.163383 71083 settings.go:142] acquiring lock: {Name:mk22a5f9462f73418f11841246079d6aecaf3c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 17:59:24.163462 71083 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/17866-67452/kubeconfig
I0108 17:59:24.164157 71083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17866-67452/kubeconfig: {Name:mk6a4a9474aeb8f5c6f190376eba6d3bc844b1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 17:59:24.164402 71083 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0108 17:59:24.164441 71083 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0108 17:59:24.164484 71083 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-306000"
I0108 17:59:24.164486 71083 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-306000"
I0108 17:59:24.164501 71083 addons.go:237] Setting addon storage-provisioner=true in "cert-expiration-306000"
I0108 17:59:24.164501 71083 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-306000"
I0108 17:59:24.164553 71083 host.go:66] Checking if "cert-expiration-306000" exists ...
I0108 17:59:24.164553 71083 config.go:182] Loaded profile config "cert-expiration-306000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 17:59:24.164805 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:24.164827 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:24.164852 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:24.164864 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:24.173835 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57602
I0108 17:59:24.174157 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:24.174501 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:24.174511 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:24.174549 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57604
I0108 17:59:24.174747 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:24.174831 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:24.175155 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:24.175162 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:24.175177 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:24.175199 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:24.175965 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:24.176213 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetState
I0108 17:59:24.176358 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 17:59:24.176386 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | hyperkit pid from json: 70842
I0108 17:59:24.178690 71083 addons.go:237] Setting addon default-storageclass=true in "cert-expiration-306000"
I0108 17:59:24.178710 71083 host.go:66] Checking if "cert-expiration-306000" exists ...
I0108 17:59:24.178963 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:24.178988 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:24.184092 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57606
I0108 17:59:24.184441 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:24.184767 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:24.184773 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:24.184965 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:24.185067 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetState
I0108 17:59:24.185148 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 17:59:24.185213 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | hyperkit pid from json: 70842
I0108 17:59:24.186332 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:24.223566 71083 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0108 17:59:24.187551 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57608
I0108 17:59:24.223991 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:24.242499 71083 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.169.0.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0108 17:59:24.245171 71083 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0108 17:59:24.245176 71083 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0108 17:59:24.245187 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:24.245321 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:24.245437 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:24.245508 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:24.245519 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:24.245537 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:24.245646 71083 sshutil.go:53] new ssh client: &{IP:192.169.0.157 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/cert-expiration-306000/id_rsa Username:docker}
I0108 17:59:24.245742 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:24.246077 71083 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 17:59:24.246097 71083 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0108 17:59:24.254472 71083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57611
I0108 17:59:24.254811 71083 main.go:141] libmachine: () Calling .GetVersion
I0108 17:59:24.255170 71083 main.go:141] libmachine: Using API Version 1
I0108 17:59:24.255180 71083 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 17:59:24.255378 71083 main.go:141] libmachine: () Calling .GetMachineName
I0108 17:59:24.255476 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetState
I0108 17:59:24.255552 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 17:59:24.255626 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | hyperkit pid from json: 70842
I0108 17:59:24.256723 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .DriverName
I0108 17:59:24.256869 71083 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
I0108 17:59:24.256874 71083 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0108 17:59:24.256884 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHHostname
I0108 17:59:24.256963 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHPort
I0108 17:59:24.257034 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHKeyPath
I0108 17:59:24.257104 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .GetSSHUsername
I0108 17:59:24.257169 71083 sshutil.go:53] new ssh client: &{IP:192.169.0.157 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17866-67452/.minikube/machines/cert-expiration-306000/id_rsa Username:docker}
I0108 17:59:24.319193 71083 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0108 17:59:24.331490 71083 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0108 17:59:24.671006 71083 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-306000" context rescaled to 1 replicas
I0108 17:59:24.671026 71083 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.157 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 17:59:24.694638 71083 out.go:177] * Verifying Kubernetes components...
I0108 17:59:24.737237 71083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 17:59:25.079994 71083 start.go:929] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
I0108 17:59:25.189196 71083 main.go:141] libmachine: Making call to close driver server
I0108 17:59:25.189208 71083 main.go:141] libmachine: Making call to close driver server
I0108 17:59:25.189214 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .Close
I0108 17:59:25.189240 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .Close
I0108 17:59:25.189382 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | Closing plugin on server side
I0108 17:59:25.189403 71083 main.go:141] libmachine: Successfully made call to close driver server
I0108 17:59:25.189418 71083 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 17:59:25.189425 71083 main.go:141] libmachine: Making call to close driver server
I0108 17:59:25.189423 71083 main.go:141] libmachine: Successfully made call to close driver server
I0108 17:59:25.189429 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .Close
I0108 17:59:25.189428 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | Closing plugin on server side
I0108 17:59:25.189434 71083 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 17:59:25.189441 71083 main.go:141] libmachine: Making call to close driver server
I0108 17:59:25.189445 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .Close
I0108 17:59:25.189585 71083 main.go:141] libmachine: Successfully made call to close driver server
I0108 17:59:25.189585 71083 main.go:141] libmachine: Successfully made call to close driver server
I0108 17:59:25.189591 71083 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 17:59:25.189592 71083 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 17:59:25.189615 71083 main.go:141] libmachine: (cert-expiration-306000) DBG | Closing plugin on server side
I0108 17:59:25.190224 71083 api_server.go:52] waiting for apiserver process to appear ...
I0108 17:59:25.190272 71083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:25.193291 71083 main.go:141] libmachine: Making call to close driver server
I0108 17:59:25.193298 71083 main.go:141] libmachine: (cert-expiration-306000) Calling .Close
I0108 17:59:25.193439 71083 main.go:141] libmachine: Successfully made call to close driver server
I0108 17:59:25.193444 71083 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 17:59:25.215970 71083 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0108 17:59:25.199913 71083 api_server.go:72] duration metric: took 528.875346ms to wait for apiserver process to appear ...
I0108 17:59:25.258076 71083 api_server.go:88] waiting for apiserver healthz status ...
I0108 17:59:25.258070 71083 addons.go:508] enable addons completed in 1.093644806s: enabled=[storage-provisioner default-storageclass]
I0108 17:59:25.258098 71083 api_server.go:253] Checking apiserver healthz at https://192.169.0.157:8443/healthz ...
I0108 17:59:25.263208 71083 api_server.go:279] https://192.169.0.157:8443/healthz returned 200:
ok
I0108 17:59:25.264279 71083 api_server.go:141] control plane version: v1.28.4
I0108 17:59:25.264289 71083 api_server.go:131] duration metric: took 6.207696ms to wait for apiserver health ...
I0108 17:59:25.264300 71083 system_pods.go:43] waiting for kube-system pods to appear ...
I0108 17:59:25.268670 71083 system_pods.go:59] 5 kube-system pods found
I0108 17:59:25.268679 71083 system_pods.go:61] "etcd-cert-expiration-306000" [c95ff337-98dd-414b-b820-30974ea4f11b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0108 17:59:25.268683 71083 system_pods.go:61] "kube-apiserver-cert-expiration-306000" [89e0757f-2f29-4a15-9570-5d4827d27839] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0108 17:59:25.268690 71083 system_pods.go:61] "kube-controller-manager-cert-expiration-306000" [d1ec9efd-b7d0-4595-975c-6eef816bd51e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0108 17:59:25.268702 71083 system_pods.go:61] "kube-scheduler-cert-expiration-306000" [c05c0a33-4c03-4b0f-b358-76c70cef5e18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0108 17:59:25.268705 71083 system_pods.go:61] "storage-provisioner" [363b8e68-26cd-464b-b005-bbb484af3563] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
I0108 17:59:25.268709 71083 system_pods.go:74] duration metric: took 4.406313ms to wait for pod list to return data ...
I0108 17:59:25.268716 71083 kubeadm.go:581] duration metric: took 597.680956ms to wait for : map[apiserver:true system_pods:true] ...
I0108 17:59:25.268724 71083 node_conditions.go:102] verifying NodePressure condition ...
I0108 17:59:25.270761 71083 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0108 17:59:25.270774 71083 node_conditions.go:123] node cpu capacity is 2
I0108 17:59:25.270782 71083 node_conditions.go:105] duration metric: took 2.055897ms to run NodePressure ...
I0108 17:59:25.270788 71083 start.go:228] waiting for startup goroutines ...
I0108 17:59:25.270791 71083 start.go:233] waiting for cluster config update ...
I0108 17:59:25.270798 71083 start.go:242] writing updated cluster config ...
I0108 17:59:25.271085 71083 ssh_runner.go:195] Run: rm -f paused
I0108 17:59:25.310875 71083 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
I0108 17:59:25.332052 71083 out.go:177] * Done! kubectl is now configured to use "cert-expiration-306000" cluster and "default" namespace by default
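Note: at this point the restart of cert-expiration-306000 with CertExpiration:8760h0m0s (see the StartCluster line above) has completed. A minimal sketch for confirming the regenerated apiserver certificate from the host, assuming kubectl is on PATH and the kubeconfig context is named after the profile, as the line above reports; the commands are illustrative and not part of the test run:
  # List nodes via the freshly written kubeconfig context.
  kubectl --context cert-expiration-306000 get nodes
  # Check the new expiry on the apiserver serving cert (192.169.0.157:8443, as logged above).
  echo | openssl s_client -connect 192.169.0.157:8443 2>/dev/null | openssl x509 -noout -enddate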
I0108 17:59:20.525974 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:21.025919 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:21.525641 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:22.025907 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:22.526051 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:23.026448 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:23.526595 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:24.026033 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:24.525482 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 17:59:25.026103 70990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
==> Docker <==
-- Journal begins at Tue 2024-01-09 01:55:56 UTC, ends at Tue 2024-01-09 01:59:26 UTC. --
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.162931786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.174412303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.174540700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.174577577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.174600289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 cri-dockerd[1431]: time="2024-01-09T01:59:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/39141150c3dde6731a768268de3c3c03c950b49da980d1c5926b7e057daea570/resolv.conf as [nameserver 192.169.0.1]"
Jan 09 01:59:18 cert-expiration-306000 cri-dockerd[1431]: time="2024-01-09T01:59:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1f688ce21a0efc449344809044009b2fd8c412db0e4f9dc8f44eead3f9f5dfd6/resolv.conf as [nameserver 192.169.0.1]"
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.742072018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.742116138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.742195830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.742211931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 cri-dockerd[1431]: time="2024-01-09T01:59:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d398bcbb4ca26645e03b630ccb65bd1ec52fd75a2ad23c7e82a5da55b39ba17b/resolv.conf as [nameserver 192.169.0.1]"
Jan 09 01:59:18 cert-expiration-306000 cri-dockerd[1431]: time="2024-01-09T01:59:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/548bc03e8dabfc30cc12226fbcfbf0a6c28a32c4929bb24368f839a47bce9f23/resolv.conf as [nameserver 192.169.0.1]"
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.833301547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.833411648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.833422174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.833431181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.873460658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.877552230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.877723898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.877793891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.877916542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.888649843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.888821759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 09 01:59:18 cert-expiration-306000 dockerd[1548]: time="2024-01-09T01:59:18.888883827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
a21bafcb4f0d8 73deb9a3f7025 8 seconds ago Running etcd 0 548bc03e8dabf etcd-cert-expiration-306000
b10f33c8f36e9 e3db313c6dbc0 8 seconds ago Running kube-scheduler 0 d398bcbb4ca26 kube-scheduler-cert-expiration-306000
f4fbd3e400826 d058aa5ab969c 8 seconds ago Running kube-controller-manager 0 1f688ce21a0ef kube-controller-manager-cert-expiration-306000
12b916cc1176c 7fe0e6f37db33 8 seconds ago Running kube-apiserver 0 39141150c3dde kube-apiserver-cert-expiration-306000
==> describe nodes <==
Name: cert-expiration-306000
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=cert-expiration-306000
kubernetes.io/os=linux
minikube.k8s.io/commit=c4ef52eca86898c65de92fcd28450f715088c13b
minikube.k8s.io/name=cert-expiration-306000
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_01_08T17_59_24_0700
minikube.k8s.io/version=v1.32.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 09 Jan 2024 01:59:21 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: cert-expiration-306000
AcquireTime: <unset>
RenewTime: Tue, 09 Jan 2024 01:59:23 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 09 Jan 2024 01:59:24 +0000 Tue, 09 Jan 2024 01:59:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 09 Jan 2024 01:59:24 +0000 Tue, 09 Jan 2024 01:59:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 09 Jan 2024 01:59:24 +0000 Tue, 09 Jan 2024 01:59:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Tue, 09 Jan 2024 01:59:24 +0000 Tue, 09 Jan 2024 01:59:19 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 192.169.0.157
Hostname: cert-expiration-306000
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 277dfe88ad904d16a044a7b9fb220071
System UUID: 35e111ee-0000-0000-a96a-f01898ef957c
Boot ID: 088c4846-5e43-46cb-b09c-d519ccb7f6d5
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.7
Kubelet Version: v1.28.4
Kube-Proxy Version: v1.28.4
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-cert-expiration-306000 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 2s
kube-system kube-apiserver-cert-expiration-306000 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2s
kube-system kube-controller-manager-cert-expiration-306000 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2s
kube-system kube-scheduler-cert-expiration-306000 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (32%) 0 (0%)
memory 100Mi (5%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2s kubelet Node cert-expiration-306000 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2s kubelet Node cert-expiration-306000 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2s kubelet Node cert-expiration-306000 status is now: NodeHasSufficientPID
==> dmesg <==
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.845977] systemd-fstab-generator[535]: Ignoring "noauto" for root device
[ +0.096618] systemd-fstab-generator[546]: Ignoring "noauto" for root device
[Jan 9 01:56] systemd-fstab-generator[732]: Ignoring "noauto" for root device
[ +0.220725] systemd-fstab-generator[772]: Ignoring "noauto" for root device
[ +0.091698] systemd-fstab-generator[783]: Ignoring "noauto" for root device
[ +0.097485] systemd-fstab-generator[796]: Ignoring "noauto" for root device
[ +1.410549] systemd-fstab-generator[956]: Ignoring "noauto" for root device
[ +0.103054] systemd-fstab-generator[967]: Ignoring "noauto" for root device
[ +0.095965] systemd-fstab-generator[978]: Ignoring "noauto" for root device
[ +0.098269] systemd-fstab-generator[989]: Ignoring "noauto" for root device
[Jan 9 01:59] systemd-fstab-generator[1126]: Ignoring "noauto" for root device
[ +0.182075] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
[ +0.082081] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
[ +0.103038] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
[ +1.188443] kauditd_printk_skb: 55 callbacks suppressed
[ +0.159230] systemd-fstab-generator[1344]: Ignoring "noauto" for root device
[ +0.088404] systemd-fstab-generator[1355]: Ignoring "noauto" for root device
[ +0.092965] systemd-fstab-generator[1366]: Ignoring "noauto" for root device
[ +0.097511] systemd-fstab-generator[1377]: Ignoring "noauto" for root device
[ +0.103834] systemd-fstab-generator[1396]: Ignoring "noauto" for root device
[ +4.427204] systemd-fstab-generator[1532]: Ignoring "noauto" for root device
[ +1.605717] kauditd_printk_skb: 29 callbacks suppressed
[ +4.196453] systemd-fstab-generator[1912]: Ignoring "noauto" for root device
[ +6.732385] systemd-fstab-generator[2812]: Ignoring "noauto" for root device
==> etcd [a21bafcb4f0d] <==
{"level":"info","ts":"2024-01-09T01:59:19.126588Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.157:2380"}
{"level":"info","ts":"2024-01-09T01:59:19.127267Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.157:2380"}
{"level":"info","ts":"2024-01-09T01:59:19.126632Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-01-09T01:59:19.127473Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-01-09T01:59:19.12767Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-01-09T01:59:19.127249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ff84b2db2fc655c switched to configuration voters=(3456595373655352668)"}
{"level":"info","ts":"2024-01-09T01:59:19.127965Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"224b9484be6dd374","local-member-id":"2ff84b2db2fc655c","added-peer-id":"2ff84b2db2fc655c","added-peer-peer-urls":["https://192.169.0.157:2380"]}
{"level":"info","ts":"2024-01-09T01:59:20.098138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ff84b2db2fc655c is starting a new election at term 1"}
{"level":"info","ts":"2024-01-09T01:59:20.098165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ff84b2db2fc655c became pre-candidate at term 1"}
{"level":"info","ts":"2024-01-09T01:59:20.098181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ff84b2db2fc655c received MsgPreVoteResp from 2ff84b2db2fc655c at term 1"}
{"level":"info","ts":"2024-01-09T01:59:20.098191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ff84b2db2fc655c became candidate at term 2"}
{"level":"info","ts":"2024-01-09T01:59:20.098195Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ff84b2db2fc655c received MsgVoteResp from 2ff84b2db2fc655c at term 2"}
{"level":"info","ts":"2024-01-09T01:59:20.098202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2ff84b2db2fc655c became leader at term 2"}
{"level":"info","ts":"2024-01-09T01:59:20.098208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2ff84b2db2fc655c elected leader 2ff84b2db2fc655c at term 2"}
{"level":"info","ts":"2024-01-09T01:59:20.098982Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2ff84b2db2fc655c","local-member-attributes":"{Name:cert-expiration-306000 ClientURLs:[https://192.169.0.157:2379]}","request-path":"/0/members/2ff84b2db2fc655c/attributes","cluster-id":"224b9484be6dd374","publish-timeout":"7s"}
{"level":"info","ts":"2024-01-09T01:59:20.099116Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-01-09T01:59:20.099234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-01-09T01:59:20.100512Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-01-09T01:59:20.100598Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-01-09T01:59:20.100709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-01-09T01:59:20.100718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-01-09T01:59:20.100773Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"224b9484be6dd374","local-member-id":"2ff84b2db2fc655c","cluster-version":"3.5"}
{"level":"info","ts":"2024-01-09T01:59:20.100824Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-01-09T01:59:20.100835Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-01-09T01:59:20.113016Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.157:2379"}
==> kernel <==
01:59:26 up 3 min, 0 users, load average: 0.72, 0.18, 0.05
Linux cert-expiration-306000 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
==> kube-apiserver [12b916cc1176] <==
I0109 01:59:21.181340 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0109 01:59:21.184430 1 shared_informer.go:318] Caches are synced for crd-autoregister
I0109 01:59:21.184464 1 aggregator.go:166] initial CRD sync complete...
I0109 01:59:21.184469 1 autoregister_controller.go:141] Starting autoregister controller
I0109 01:59:21.184474 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0109 01:59:21.184478 1 cache.go:39] Caches are synced for autoregister controller
I0109 01:59:21.186650 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0109 01:59:21.188495 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0109 01:59:21.188529 1 shared_informer.go:318] Caches are synced for configmaps
I0109 01:59:21.192001 1 apf_controller.go:377] Running API Priority and Fairness config worker
I0109 01:59:21.192029 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I0109 01:59:21.213487 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0109 01:59:22.084410 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0109 01:59:22.087862 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0109 01:59:22.087870 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0109 01:59:22.404523 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0109 01:59:22.450074 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0109 01:59:22.489806 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0109 01:59:22.494438 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.169.0.157]
I0109 01:59:22.495001 1 controller.go:624] quota admission added evaluator for: endpoints
I0109 01:59:22.498268 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0109 01:59:23.169670 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0109 01:59:23.833451 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0109 01:59:23.839941 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0109 01:59:23.849873 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
==> kube-controller-manager [f4fbd3e40082] <==
I0109 01:59:23.568904 1 controllermanager.go:642] "Started controller" controller="ttl-controller"
I0109 01:59:23.568933 1 ttl_controller.go:124] "Starting TTL controller"
I0109 01:59:23.568938 1 shared_informer.go:311] Waiting for caches to sync for TTL
I0109 01:59:23.717003 1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
I0109 01:59:23.717062 1 pvc_protection_controller.go:102] "Starting PVC protection controller"
I0109 01:59:23.717069 1 shared_informer.go:311] Waiting for caches to sync for PVC protection
I0109 01:59:23.867626 1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
I0109 01:59:23.867690 1 daemon_controller.go:291] "Starting daemon sets controller"
I0109 01:59:23.867696 1 shared_informer.go:311] Waiting for caches to sync for daemon sets
I0109 01:59:24.017779 1 controllermanager.go:642] "Started controller" controller="job-controller"
I0109 01:59:24.017796 1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
I0109 01:59:24.017801 1 controllermanager.go:620] "Warning: skipping controller" controller="node-route-controller"
I0109 01:59:24.017835 1 job_controller.go:226] "Starting job controller"
I0109 01:59:24.017840 1 shared_informer.go:311] Waiting for caches to sync for job
E0109 01:59:24.168688 1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
I0109 01:59:24.168722 1 controllermanager.go:620] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
I0109 01:59:24.340684 1 controllermanager.go:642] "Started controller" controller="persistentvolume-protection-controller"
I0109 01:59:24.340743 1 pv_protection_controller.go:78] "Starting PV protection controller"
I0109 01:59:24.340765 1 shared_informer.go:311] Waiting for caches to sync for PV protection
I0109 01:59:24.467335 1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
I0109 01:59:24.467422 1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
I0109 01:59:24.646297 1 controllermanager.go:642] "Started controller" controller="token-cleaner-controller"
I0109 01:59:24.646335 1 tokencleaner.go:112] "Starting token cleaner controller"
I0109 01:59:24.646341 1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
I0109 01:59:24.646346 1 shared_informer.go:318] Caches are synced for token_cleaner
==> kube-scheduler [b10f33c8f36e] <==
W0109 01:59:21.174750 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0109 01:59:21.175645 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0109 01:59:21.174914 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0109 01:59:21.175653 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0109 01:59:21.174940 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0109 01:59:21.175666 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0109 01:59:21.175075 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0109 01:59:21.175823 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0109 01:59:21.174354 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0109 01:59:21.175833 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0109 01:59:22.002450 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0109 01:59:22.002473 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0109 01:59:22.052666 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0109 01:59:22.052703 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0109 01:59:22.074070 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0109 01:59:22.074085 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0109 01:59:22.090301 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0109 01:59:22.090339 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0109 01:59:22.216102 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0109 01:59:22.216266 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0109 01:59:22.242482 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0109 01:59:22.242641 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0109 01:59:22.336459 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0109 01:59:22.336730 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0109 01:59:24.167002 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Journal begins at Tue 2024-01-09 01:55:56 UTC, ends at Tue 2024-01-09 01:59:27 UTC. --
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.038408 2832 topology_manager.go:215] "Topology Admit Handler" podUID="26818365c2a20327ae7647ac4e59937d" podNamespace="kube-system" podName="kube-apiserver-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.038480 2832 topology_manager.go:215] "Topology Admit Handler" podUID="b5d239375a06eec28517c9d851da5d81" podNamespace="kube-system" podName="kube-controller-manager-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.038585 2832 topology_manager.go:215] "Topology Admit Handler" podUID="0b79fe6262984a3e38fa77f3cd263092" podNamespace="kube-system" podName="kube-scheduler-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.048008 2832 kubelet_node_status.go:70] "Attempting to register node" node="cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.091714 2832 kubelet_node_status.go:108] "Node was previously registered" node="cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.091768 2832 kubelet_node_status.go:73] "Successfully registered node" node="cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236571 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5d239375a06eec28517c9d851da5d81-usr-share-ca-certificates\") pod \"kube-controller-manager-cert-expiration-306000\" (UID: \"b5d239375a06eec28517c9d851da5d81\") " pod="kube-system/kube-controller-manager-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236633 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b79fe6262984a3e38fa77f3cd263092-kubeconfig\") pod \"kube-scheduler-cert-expiration-306000\" (UID: \"0b79fe6262984a3e38fa77f3cd263092\") " pod="kube-system/kube-scheduler-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236654 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/7897af33683f352d42791240b542086e-etcd-data\") pod \"etcd-cert-expiration-306000\" (UID: \"7897af33683f352d42791240b542086e\") " pod="kube-system/etcd-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236669 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26818365c2a20327ae7647ac4e59937d-ca-certs\") pod \"kube-apiserver-cert-expiration-306000\" (UID: \"26818365c2a20327ae7647ac4e59937d\") " pod="kube-system/kube-apiserver-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236687 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5d239375a06eec28517c9d851da5d81-ca-certs\") pod \"kube-controller-manager-cert-expiration-306000\" (UID: \"b5d239375a06eec28517c9d851da5d81\") " pod="kube-system/kube-controller-manager-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236702 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5d239375a06eec28517c9d851da5d81-k8s-certs\") pod \"kube-controller-manager-cert-expiration-306000\" (UID: \"b5d239375a06eec28517c9d851da5d81\") " pod="kube-system/kube-controller-manager-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236714 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5d239375a06eec28517c9d851da5d81-kubeconfig\") pod \"kube-controller-manager-cert-expiration-306000\" (UID: \"b5d239375a06eec28517c9d851da5d81\") " pod="kube-system/kube-controller-manager-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236762 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/7897af33683f352d42791240b542086e-etcd-certs\") pod \"etcd-cert-expiration-306000\" (UID: \"7897af33683f352d42791240b542086e\") " pod="kube-system/etcd-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236782 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26818365c2a20327ae7647ac4e59937d-k8s-certs\") pod \"kube-apiserver-cert-expiration-306000\" (UID: \"26818365c2a20327ae7647ac4e59937d\") " pod="kube-system/kube-apiserver-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236800 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26818365c2a20327ae7647ac4e59937d-usr-share-ca-certificates\") pod \"kube-apiserver-cert-expiration-306000\" (UID: \"26818365c2a20327ae7647ac4e59937d\") " pod="kube-system/kube-apiserver-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.236816 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b5d239375a06eec28517c9d851da5d81-flexvolume-dir\") pod \"kube-controller-manager-cert-expiration-306000\" (UID: \"b5d239375a06eec28517c9d851da5d81\") " pod="kube-system/kube-controller-manager-cert-expiration-306000"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.924757 2832 apiserver.go:52] "Watching apiserver"
Jan 09 01:59:24 cert-expiration-306000 kubelet[2832]: I0109 01:59:24.934208 2832 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 09 01:59:25 cert-expiration-306000 kubelet[2832]: E0109 01:59:25.013549 2832 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-expiration-306000\" already exists" pod="kube-system/kube-apiserver-cert-expiration-306000"
Jan 09 01:59:25 cert-expiration-306000 kubelet[2832]: I0109 01:59:25.030290 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-cert-expiration-306000" podStartSLOduration=1.030258203 podCreationTimestamp="2024-01-09 01:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 01:59:25.024299268 +0000 UTC m=+1.205874089" watchObservedRunningTime="2024-01-09 01:59:25.030258203 +0000 UTC m=+1.211833014"
Jan 09 01:59:25 cert-expiration-306000 kubelet[2832]: I0109 01:59:25.037438 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-cert-expiration-306000" podStartSLOduration=1.03741415 podCreationTimestamp="2024-01-09 01:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 01:59:25.030573825 +0000 UTC m=+1.212148646" watchObservedRunningTime="2024-01-09 01:59:25.03741415 +0000 UTC m=+1.218988970"
Jan 09 01:59:25 cert-expiration-306000 kubelet[2832]: I0109 01:59:25.043721 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-cert-expiration-306000" podStartSLOduration=1.043696838 podCreationTimestamp="2024-01-09 01:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 01:59:25.037712498 +0000 UTC m=+1.219287319" watchObservedRunningTime="2024-01-09 01:59:25.043696838 +0000 UTC m=+1.225271654"
Jan 09 01:59:25 cert-expiration-306000 kubelet[2832]: I0109 01:59:25.050453 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-cert-expiration-306000" podStartSLOduration=1.050426614 podCreationTimestamp="2024-01-09 01:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-09 01:59:25.044041926 +0000 UTC m=+1.225616747" watchObservedRunningTime="2024-01-09 01:59:25.050426614 +0000 UTC m=+1.232001435"
Jan 09 01:59:26 cert-expiration-306000 kubelet[2832]: I0109 01:59:26.940988 2832 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p cert-expiration-306000 -n cert-expiration-306000
helpers_test.go:261: (dbg) Run: kubectl --context cert-expiration-306000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestCertExpiration]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context cert-expiration-306000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context cert-expiration-306000 describe pod storage-provisioner: exit status 1 (50.576843ms)
** stderr **
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:279: kubectl --context cert-expiration-306000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "cert-expiration-306000" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-darwin-amd64 delete -p cert-expiration-306000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-306000: (3.428841402s)
--- FAIL: TestCertExpiration (222.84s)