=== RUN TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT TestCertExpiration
cert_options_test.go:123: (dbg) Run: out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=3m --driver=hyperkit
=== CONT TestCertExpiration
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 90 (20.327867131s)
-- stdout --
* [cert-expiration-729000] minikube v1.28.0 on Darwin 13.2
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on user configuration
* Starting control plane node cert-expiration-729000 in cluster cert-expiration-729000
* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-- /stdout --
** stderr **
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 90
E0127 20:02:47.091071 4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
=== CONT TestCertExpiration
cert_options_test.go:131: (dbg) Run: out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=8760h --driver=hyperkit
E0127 20:05:31.896145 4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:52.377437 4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
=== CONT TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (23.949517656s)
cert_options_test.go:136: minikube start output did not warn about expired certs:
-- stdout --
* [cert-expiration-729000] minikube v1.28.0 on Darwin 13.2
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on existing profile
* Starting control plane node cert-expiration-729000 in cluster cert-expiration-729000
* Updating the running hyperkit "cert-expiration-729000" VM ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "cert-expiration-729000" cluster and "default" namespace by default
-- /stdout --
** stderr **
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-01-27 20:05:55.322302 -0800 PST m=+2143.754742106
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-729000 -n cert-expiration-729000
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p cert-expiration-729000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p cert-expiration-729000 logs -n 25: (2.037446667s)
helpers_test.go:252: TestCertExpiration logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-035000 sudo | cilium-035000 | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | |
| | containerd config dump | | | | | |
| ssh | -p cilium-035000 sudo | cilium-035000 | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-035000 sudo | cilium-035000 | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-035000 sudo find | cilium-035000 | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-035000 sudo crio | cilium-035000 | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | |
| | config | | | | | |
| delete | -p cilium-035000 | cilium-035000 | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | 27 Jan 23 20:00 PST |
| start | -p force-systemd-env-631000 | force-systemd-env-631000 | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | 27 Jan 23 20:01 PST |
| | --memory=2048 | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p offline-docker-310000 | offline-docker-310000 | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:01 PST |
| start | -p force-systemd-flag-814000 | force-systemd-flag-814000 | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:02 PST |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | force-systemd-env-631000 | force-systemd-env-631000 | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:01 PST |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-env-631000 | force-systemd-env-631000 | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:01 PST |
| start | -p docker-flags-643000 | docker-flags-643000 | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:02 PST |
| | --cache-images=false | | | | | |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=false | | | | | |
| | --docker-env=FOO=BAR | | | | | |
| | --docker-env=BAZ=BAT | | | | | |
| | --docker-opt=debug | | | | | |
| | --docker-opt=icc=true | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | force-systemd-flag-814000 | force-systemd-flag-814000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-flag-814000 | force-systemd-flag-814000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
| start | -p cert-expiration-729000 | cert-expiration-729000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | docker-flags-643000 ssh | docker-flags-643000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-643000 ssh | docker-flags-643000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-643000 | docker-flags-643000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
| start | -p cert-options-460000 | cert-options-460000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:03 PST |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | cert-options-460000 ssh | cert-options-460000 | jenkins | v1.28.0 | 27 Jan 23 20:03 PST | 27 Jan 23 20:03 PST |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-460000 -- sudo | cert-options-460000 | jenkins | v1.28.0 | 27 Jan 23 20:03 PST | 27 Jan 23 20:03 PST |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-460000 | cert-options-460000 | jenkins | v1.28.0 | 27 Jan 23 20:03 PST | 27 Jan 23 20:03 PST |
| start | -p running-upgrade-052000 | running-upgrade-052000 | jenkins | v1.28.0 | 27 Jan 23 20:04 PST | 27 Jan 23 20:05 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p cert-expiration-729000 | cert-expiration-729000 | jenkins | v1.28.0 | 27 Jan 23 20:05 PST | 27 Jan 23 20:05 PST |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p running-upgrade-052000 | running-upgrade-052000 | jenkins | v1.28.0 | 27 Jan 23 20:05 PST | |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/27 20:05:31
Running on machine: MacOS-Agent-4
Binary: Built with gc go1.19.5 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0127 20:05:31.430413 10037 out.go:296] Setting OutFile to fd 1 ...
I0127 20:05:31.430567 10037 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0127 20:05:31.430571 10037 out.go:309] Setting ErrFile to fd 2...
I0127 20:05:31.430574 10037 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0127 20:05:31.430683 10037 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
I0127 20:05:31.431177 10037 out.go:303] Setting JSON to false
I0127 20:05:31.449709 10037 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3906,"bootTime":1674874825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0127 20:05:31.449793 10037 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0127 20:05:31.479973 10037 out.go:177] * [cert-expiration-729000] minikube v1.28.0 on Darwin 13.2
I0127 20:05:31.521792 10037 notify.go:220] Checking for updates...
I0127 20:05:31.543564 10037 out.go:177] - MINIKUBE_LOCATION=15565
I0127 20:05:31.564703 10037 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
I0127 20:05:31.585754 10037 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0127 20:05:31.606755 10037 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 20:05:31.628580 10037 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
I0127 20:05:31.649793 10037 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 20:05:31.672338 10037 config.go:180] Loaded profile config "cert-expiration-729000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0127 20:05:31.673032 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:31.673106 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:31.680966 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52581
I0127 20:05:31.681375 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:31.681771 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:31.681778 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:31.681974 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:31.682080 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:31.682214 10037 driver.go:365] Setting default libvirt URI to qemu:///system
I0127 20:05:31.682465 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:31.682486 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:31.689207 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52583
I0127 20:05:31.689553 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:31.689882 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:31.689890 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:31.690076 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:31.690170 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:31.717678 10037 out.go:177] * Using the hyperkit driver based on existing profile
I0127 20:05:31.759488 10037 start.go:296] selected driver: hyperkit
I0127 20:05:31.759569 10037 start.go:840] validating driver "hyperkit" against &{Name:cert-expiration-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0127 20:05:31.759743 10037 start.go:851] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 20:05:31.763859 10037 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 20:05:31.763993 10037 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0127 20:05:31.771086 10037 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
I0127 20:05:31.774339 10037 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:31.774351 10037 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0127 20:05:31.774424 10037 cni.go:84] Creating CNI manager for ""
I0127 20:05:31.774436 10037 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0127 20:05:31.774446 10037 start_flags.go:319] config:
{Name:cert-expiration-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0127 20:05:31.774576 10037 iso.go:125] acquiring lock: {Name:mkeeb6f52f7fa0577f04180383dbb7ed67f33d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 20:05:31.816479 10037 out.go:177] * Starting control plane node cert-expiration-729000 in cluster cert-expiration-729000
I0127 20:05:31.837589 10037 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0127 20:05:31.837665 10037 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0127 20:05:31.837689 10037 cache.go:57] Caching tarball of preloaded images
I0127 20:05:31.837881 10037 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 20:05:31.837894 10037 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0127 20:05:31.838032 10037 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/config.json ...
I0127 20:05:31.838906 10037 cache.go:193] Successfully downloaded all kic artifacts
I0127 20:05:31.838951 10037 start.go:364] acquiring machines lock for cert-expiration-729000: {Name:mk69c04a34b14d26e3f74e414bcb566a33d5b215 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 20:05:31.839050 10037 start.go:368] acquired machines lock for "cert-expiration-729000" in 83.928µs
I0127 20:05:31.839090 10037 start.go:96] Skipping create...Using existing machine configuration
I0127 20:05:31.839104 10037 fix.go:55] fixHost starting:
I0127 20:05:31.839540 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:31.839565 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:31.847119 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52585
I0127 20:05:31.847481 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:31.847863 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:31.847877 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:31.848074 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:31.848187 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:31.848279 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
I0127 20:05:31.848371 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0127 20:05:31.848446 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
I0127 20:05:31.849313 10037 fix.go:103] recreateIfNeeded on cert-expiration-729000: state=Running err=<nil>
W0127 20:05:31.849324 10037 fix.go:129] unexpected machine state, will restart: <nil>
I0127 20:05:31.891634 10037 out.go:177] * Updating the running hyperkit "cert-expiration-729000" VM ...
I0127 20:05:27.314666 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:27.814670 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:28.312595 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:28.814565 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:29.313942 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:29.813568 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:30.314556 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:30.814517 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:31.312771 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:31.813046 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:31.912880 10037 machine.go:88] provisioning docker machine ...
I0127 20:05:31.912947 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:31.913259 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetMachineName
I0127 20:05:31.913480 10037 buildroot.go:166] provisioning hostname "cert-expiration-729000"
I0127 20:05:31.913497 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetMachineName
I0127 20:05:31.913693 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:31.913864 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:31.914051 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:31.914255 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:31.914448 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:31.914680 10037 main.go:141] libmachine: Using SSH client type: native
I0127 20:05:31.914990 10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0127 20:05:31.915001 10037 main.go:141] libmachine: About to run SSH command:
sudo hostname cert-expiration-729000 && echo "cert-expiration-729000" | sudo tee /etc/hostname
I0127 20:05:32.005292 10037 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-729000
I0127 20:05:32.005305 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.005433 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.005509 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.005592 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.005677 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.005790 10037 main.go:141] libmachine: Using SSH client type: native
I0127 20:05:32.005915 10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0127 20:05:32.005928 10037 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\scert-expiration-729000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-729000/g' /etc/hosts;
else
echo '127.0.1.1 cert-expiration-729000' | sudo tee -a /etc/hosts;
fi
fi
I0127 20:05:32.084026 10037 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 20:05:32.084037 10037 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3235/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3235/.minikube}
I0127 20:05:32.084051 10037 buildroot.go:174] setting up certificates
I0127 20:05:32.084060 10037 provision.go:83] configureAuth start
I0127 20:05:32.084065 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetMachineName
I0127 20:05:32.084203 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetIP
I0127 20:05:32.084289 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.084367 10037 provision.go:138] copyHostCerts
I0127 20:05:32.084446 10037 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem, removing ...
I0127 20:05:32.084454 10037 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem
I0127 20:05:32.097697 10037 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem (1082 bytes)
I0127 20:05:32.098068 10037 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem, removing ...
I0127 20:05:32.098079 10037 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem
I0127 20:05:32.098237 10037 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem (1123 bytes)
I0127 20:05:32.098509 10037 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem, removing ...
I0127 20:05:32.098515 10037 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem
I0127 20:05:32.098636 10037 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem (1675 bytes)
I0127 20:05:32.098849 10037 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-729000 san=[192.168.64.23 192.168.64.23 localhost 127.0.0.1 minikube cert-expiration-729000]
I0127 20:05:32.154738 10037 provision.go:172] copyRemoteCerts
I0127 20:05:32.154792 10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 20:05:32.154811 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.154937 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.155046 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.155128 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.155227 10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
I0127 20:05:32.200282 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0127 20:05:32.215501 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0127 20:05:32.230837 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0127 20:05:32.246059 10037 provision.go:86] duration metric: configureAuth took 161.987414ms
I0127 20:05:32.246067 10037 buildroot.go:189] setting minikube options for container-runtime
I0127 20:05:32.246204 10037 config.go:180] Loaded profile config "cert-expiration-729000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0127 20:05:32.246218 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:32.246351 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.246448 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.246525 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.246614 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.246685 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.246779 10037 main.go:141] libmachine: Using SSH client type: native
I0127 20:05:32.246878 10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0127 20:05:32.246883 10037 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0127 20:05:32.324943 10037 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0127 20:05:32.324949 10037 buildroot.go:70] root file system type: tmpfs
I0127 20:05:32.325081 10037 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0127 20:05:32.325100 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.325224 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.325296 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.325377 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.325454 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.325593 10037 main.go:141] libmachine: Using SSH client type: native
I0127 20:05:32.325700 10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0127 20:05:32.325745 10037 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0127 20:05:32.414803 10037 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0127 20:05:32.414820 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.414946 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.415032 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.415137 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.415217 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.415337 10037 main.go:141] libmachine: Using SSH client type: native
I0127 20:05:32.415450 10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0127 20:05:32.415459 10037 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0127 20:05:32.497800 10037 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 20:05:32.497806 10037 machine.go:91] provisioned docker machine in 584.931958ms
I0127 20:05:32.497814 10037 start.go:300] post-start starting for "cert-expiration-729000" (driver="hyperkit")
I0127 20:05:32.497818 10037 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 20:05:32.497828 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:32.498007 10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 20:05:32.498016 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.498104 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.498185 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.498251 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.498326 10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
I0127 20:05:32.543474 10037 ssh_runner.go:195] Run: cat /etc/os-release
I0127 20:05:32.546025 10037 info.go:137] Remote host: Buildroot 2021.02.12
I0127 20:05:32.546037 10037 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/addons for local assets ...
I0127 20:05:32.546116 10037 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/files for local assets ...
I0127 20:05:32.546259 10037 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem -> 44422.pem in /etc/ssl/certs
I0127 20:05:32.546409 10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 20:05:32.551985 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem --> /etc/ssl/certs/44422.pem (1708 bytes)
I0127 20:05:32.568131 10037 start.go:303] post-start completed in 70.312991ms
I0127 20:05:32.568143 10037 fix.go:57] fixHost completed within 729.063605ms
I0127 20:05:32.568156 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.568281 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.568381 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.568500 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.568584 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.568688 10037 main.go:141] libmachine: Using SSH client type: native
I0127 20:05:32.568798 10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0127 20:05:32.568803 10037 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 20:05:32.645897 10037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674878732.859804806
I0127 20:05:32.645903 10037 fix.go:207] guest clock: 1674878732.859804806
I0127 20:05:32.645907 10037 fix.go:220] Guest: 2023-01-27 20:05:32.859804806 -0800 PST Remote: 2023-01-27 20:05:32.568146 -0800 PST m=+1.187546836 (delta=291.658806ms)
I0127 20:05:32.645926 10037 fix.go:191] guest clock delta is within tolerance: 291.658806ms
I0127 20:05:32.645929 10037 start.go:83] releasing machines lock for "cert-expiration-729000", held for 806.892226ms
I0127 20:05:32.645944 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:32.646069 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetIP
I0127 20:05:32.646154 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:32.646466 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:32.646594 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:32.646682 10037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 20:05:32.646708 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.646724 10037 ssh_runner.go:195] Run: cat /version.json
I0127 20:05:32.646732 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:32.646804 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.646830 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:32.646886 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.646922 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:32.646952 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.646985 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:32.647020 10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
I0127 20:05:32.647079 10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
W0127 20:05:32.687525 10037 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
I0127 20:05:32.687589 10037 ssh_runner.go:195] Run: systemctl --version
I0127 20:05:32.751318 10037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 20:05:32.755512 10037 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 20:05:32.755580 10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0127 20:05:32.761318 10037 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0127 20:05:32.771947 10037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 20:05:32.777308 10037 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0127 20:05:32.777316 10037 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0127 20:05:32.777386 10037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 20:05:32.793076 10037 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0127 20:05:32.793090 10037 docker.go:560] Images already preloaded, skipping extraction
I0127 20:05:32.793094 10037 start.go:472] detecting cgroup driver to use...
I0127 20:05:32.793172 10037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 20:05:32.805431 10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0127 20:05:32.811696 10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 20:05:32.818019 10037 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 20:05:32.818072 10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 20:05:32.824633 10037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 20:05:32.830923 10037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 20:05:32.837823 10037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 20:05:32.844685 10037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 20:05:32.851797 10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 20:05:32.858655 10037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 20:05:32.864826 10037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0127 20:05:32.871028 10037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 20:05:32.961920 10037 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 20:05:32.974188 10037 start.go:472] detecting cgroup driver to use...
I0127 20:05:32.974252 10037 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0127 20:05:32.983819 10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0127 20:05:32.992737 10037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0127 20:05:33.005065 10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0127 20:05:33.013689 10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 20:05:33.022060 10037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 20:05:33.034812 10037 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0127 20:05:33.121278 10037 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0127 20:05:33.217290 10037 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0127 20:05:33.217303 10037 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0127 20:05:33.228482 10037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 20:05:33.323545 10037 ssh_runner.go:195] Run: sudo systemctl restart docker
I0127 20:05:34.547424 10037 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.223893832s)
I0127 20:05:34.547475 10037 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0127 20:05:34.632374 10037 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0127 20:05:34.716786 10037 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0127 20:05:34.799783 10037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 20:05:34.887497 10037 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0127 20:05:34.902733 10037 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0127 20:05:34.902807 10037 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0127 20:05:34.912654 10037 start.go:540] Will wait 60s for crictl version
I0127 20:05:34.912707 10037 ssh_runner.go:195] Run: which crictl
I0127 20:05:34.915300 10037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 20:05:34.979884 10037 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0127 20:05:34.979952 10037 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 20:05:35.002679 10037 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0127 20:05:35.071662 10037 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0127 20:05:35.071815 10037 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0127 20:05:35.075875 10037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 20:05:35.084734 10037 localpath.go:92] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/client.crt -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/client.crt
I0127 20:05:35.085003 10037 localpath.go:117] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/client.key -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/client.key
I0127 20:05:35.085182 10037 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0127 20:05:35.085232 10037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 20:05:35.102018 10037 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0127 20:05:35.102025 10037 docker.go:560] Images already preloaded, skipping extraction
I0127 20:05:35.102093 10037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0127 20:05:35.118516 10037 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0127 20:05:35.118531 10037 cache_images.go:84] Images are preloaded, skipping loading
I0127 20:05:35.118599 10037 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0127 20:05:35.145820 10037 cni.go:84] Creating CNI manager for ""
I0127 20:05:35.145831 10037 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0127 20:05:35.145851 10037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0127 20:05:35.145864 10037 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.23 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-729000 NodeName:cert-expiration-729000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0127 20:05:35.145952 10037 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.23
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "cert-expiration-729000"
kubeletExtraArgs:
node-ip: 192.168.64.23
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.23"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0127 20:05:35.146039 10037 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cert-expiration-729000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.23
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0127 20:05:35.146095 10037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0127 20:05:35.152454 10037 binaries.go:44] Found k8s binaries, skipping transfer
I0127 20:05:35.152497 10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 20:05:35.158468 10037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (455 bytes)
I0127 20:05:35.169722 10037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 20:05:35.180566 10037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
I0127 20:05:35.191593 10037 ssh_runner.go:195] Run: grep 192.168.64.23 control-plane.minikube.internal$ /etc/hosts
I0127 20:05:35.193776 10037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.23 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 20:05:35.201372 10037 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000 for IP: 192.168.64.23
I0127 20:05:35.201382 10037 certs.go:186] acquiring lock for shared ca certs: {Name:mk29c07f32f81afc524ae789005062e84bfc25e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:35.201522 10037 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.key
I0127 20:05:35.201573 10037 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3235/.minikube/proxy-client-ca.key
I0127 20:05:35.201658 10037 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/client.key
I0127 20:05:35.201677 10037 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca
I0127 20:05:35.201694 10037 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca with IP's: [192.168.64.23 10.96.0.1 127.0.0.1 10.0.0.1]
I0127 20:05:35.325279 10037 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca ...
I0127 20:05:35.325290 10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca: {Name:mk4d91c120259812f82f819b1b530e466fc67aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:35.325577 10037 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca ...
I0127 20:05:35.325582 10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca: {Name:mk083b3eef99ce5d463fa9d03b82e06737dfbb52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:35.325756 10037 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt
I0127 20:05:35.326068 10037 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key
I0127 20:05:35.326310 10037 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key
I0127 20:05:35.326326 10037 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt with IP's: []
I0127 20:05:35.402161 10037 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt ...
I0127 20:05:35.402168 10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt: {Name:mk4ff3a897a7964e3c4ef42aadbfba8d3de95f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:35.402389 10037 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key ...
I0127 20:05:35.402393 10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key: {Name:mk62d9b128d7b53edaec6a6ba328bfd0e5b97f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:35.402756 10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/4442.pem (1338 bytes)
W0127 20:05:35.402790 10037 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/4442_empty.pem, impossibly tiny 0 bytes
I0127 20:05:35.402798 10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem (1679 bytes)
I0127 20:05:35.402828 10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem (1082 bytes)
I0127 20:05:35.402855 10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem (1123 bytes)
I0127 20:05:35.402882 10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem (1675 bytes)
I0127 20:05:35.402940 10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem (1708 bytes)
I0127 20:05:35.403401 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0127 20:05:35.419766 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 20:05:35.435246 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 20:05:35.450624 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 20:05:35.465897 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 20:05:35.481529 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0127 20:05:35.496878 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 20:05:35.512119 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0127 20:05:35.527392 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/4442.pem --> /usr/share/ca-certificates/4442.pem (1338 bytes)
I0127 20:05:35.542606 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem --> /usr/share/ca-certificates/44422.pem (1708 bytes)
I0127 20:05:35.558088 10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 20:05:35.573127 10037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 20:05:35.584246 10037 ssh_runner.go:195] Run: openssl version
I0127 20:05:35.587539 10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4442.pem && ln -fs /usr/share/ca-certificates/4442.pem /etc/ssl/certs/4442.pem"
I0127 20:05:35.594409 10037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4442.pem
I0127 20:05:35.597250 10037 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:34 /usr/share/ca-certificates/4442.pem
I0127 20:05:35.597284 10037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4442.pem
I0127 20:05:35.600762 10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4442.pem /etc/ssl/certs/51391683.0"
I0127 20:05:35.607660 10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44422.pem && ln -fs /usr/share/ca-certificates/44422.pem /etc/ssl/certs/44422.pem"
I0127 20:05:35.614726 10037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44422.pem
I0127 20:05:35.617559 10037 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:34 /usr/share/ca-certificates/44422.pem
I0127 20:05:35.617590 10037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44422.pem
I0127 20:05:35.620982 10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44422.pem /etc/ssl/certs/3ec20f2e.0"
I0127 20:05:35.627560 10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 20:05:35.634354 10037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 20:05:35.637216 10037 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:30 /usr/share/ca-certificates/minikubeCA.pem
I0127 20:05:35.637245 10037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 20:05:35.640648 10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 20:05:35.647393 10037 kubeadm.go:401] StartCluster: {Name:cert-expiration-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0127 20:05:35.647472 10037 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0127 20:05:35.663103 10037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 20:05:35.669486 10037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 20:05:35.675606 10037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 20:05:35.681822 10037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 20:05:35.681840 10037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 20:05:35.747445 10037 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
I0127 20:05:35.747548 10037 kubeadm.go:322] [preflight] Running pre-flight checks
I0127 20:05:35.899233 10037 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 20:05:35.899313 10037 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 20:05:35.899384 10037 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0127 20:05:36.006971 10037 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 20:05:36.031134 10037 out.go:204] - Generating certificates and keys ...
I0127 20:05:36.031200 10037 kubeadm.go:322] [certs] Using existing ca certificate authority
I0127 20:05:36.031249 10037 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0127 20:05:36.107640 10037 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0127 20:05:36.358376 10037 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0127 20:05:32.313946 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:32.813573 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:33.313151 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:33.812358 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:34.312979 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:34.812234 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:35.313398 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:35.812812 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:36.312348 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:36.812404 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:36.503062 10037 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0127 20:05:36.954367 10037 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0127 20:05:37.070016 10037 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0127 20:05:37.070133 10037 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-729000 localhost] and IPs [192.168.64.23 127.0.0.1 ::1]
I0127 20:05:37.336269 10037 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0127 20:05:37.336543 10037 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-729000 localhost] and IPs [192.168.64.23 127.0.0.1 ::1]
I0127 20:05:37.659751 10037 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0127 20:05:37.902749 10037 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0127 20:05:38.151772 10037 kubeadm.go:322] [certs] Generating "sa" key and public key
I0127 20:05:38.151847 10037 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 20:05:38.255575 10037 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 20:05:38.634544 10037 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 20:05:38.837442 10037 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 20:05:39.068915 10037 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 20:05:39.079354 10037 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 20:05:39.080353 10037 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 20:05:39.080454 10037 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0127 20:05:39.170078 10037 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 20:05:39.195775 10037 out.go:204] - Booting up control plane ...
I0127 20:05:39.195861 10037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 20:05:39.195939 10037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 20:05:39.196000 10037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 20:05:39.196071 10037 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 20:05:39.196188 10037 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0127 20:05:37.312379 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:37.814101 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:38.312806 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:38.812537 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:39.313304 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:39.812630 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:40.313115 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:40.812862 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:41.313786 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:41.813804 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:42.312911 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:42.813263 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:43.314156 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:43.812685 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:44.312470 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:44.812340 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:45.313554 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:45.813228 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:46.313716 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:46.813699 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:49.674793 10037 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.501763 seconds
I0127 20:05:49.674884 10037 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 20:05:49.683872 10037 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 20:05:47.311991 9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:47.318121 9771 api_server.go:71] duration metric: took 22.013115013s to wait for apiserver process to appear ...
I0127 20:05:47.318136 9771 api_server.go:87] waiting for apiserver healthz status ...
I0127 20:05:47.318148 9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
I0127 20:05:52.699590 10037 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0127 20:05:52.699750 10037 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-729000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 20:05:53.207742 10037 kubeadm.go:322] [bootstrap-token] Using token: rrg0sd.e2ykdqsntf61sc8i
I0127 20:05:53.246595 10037 out.go:204] - Configuring RBAC rules ...
I0127 20:05:53.246717 10037 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 20:05:53.248053 10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 20:05:53.254260 10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 20:05:53.257283 10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 20:05:53.260065 10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 20:05:53.263640 10037 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 20:05:53.272369 10037 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 20:05:53.443381 10037 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0127 20:05:53.651092 10037 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0127 20:05:53.651982 10037 kubeadm.go:322]
I0127 20:05:53.652043 10037 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0127 20:05:53.652048 10037 kubeadm.go:322]
I0127 20:05:53.652122 10037 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0127 20:05:53.652128 10037 kubeadm.go:322]
I0127 20:05:53.652144 10037 kubeadm.go:322] mkdir -p $HOME/.kube
I0127 20:05:53.652202 10037 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 20:05:53.652236 10037 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 20:05:53.652245 10037 kubeadm.go:322]
I0127 20:05:53.652286 10037 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0127 20:05:53.652289 10037 kubeadm.go:322]
I0127 20:05:53.652334 10037 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 20:05:53.652338 10037 kubeadm.go:322]
I0127 20:05:53.652379 10037 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0127 20:05:53.652442 10037 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 20:05:53.652496 10037 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 20:05:53.652511 10037 kubeadm.go:322]
I0127 20:05:53.652571 10037 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0127 20:05:53.652615 10037 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0127 20:05:53.652617 10037 kubeadm.go:322]
I0127 20:05:53.652675 10037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rrg0sd.e2ykdqsntf61sc8i \
I0127 20:05:53.652751 10037 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:76459747d447fbe53349461588d71983b7f5033bb09648befce7f96802f57b57 \
I0127 20:05:53.652764 10037 kubeadm.go:322] --control-plane
I0127 20:05:53.652766 10037 kubeadm.go:322]
I0127 20:05:53.652829 10037 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0127 20:05:53.652833 10037 kubeadm.go:322]
I0127 20:05:53.652892 10037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rrg0sd.e2ykdqsntf61sc8i \
I0127 20:05:53.652989 10037 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:76459747d447fbe53349461588d71983b7f5033bb09648befce7f96802f57b57
I0127 20:05:53.654056 10037 kubeadm.go:322] W0128 04:05:35.958983 1701 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0127 20:05:53.654134 10037 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 20:05:53.654149 10037 cni.go:84] Creating CNI manager for ""
I0127 20:05:53.654157 10037 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0127 20:05:53.713238 10037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 20:05:53.750543 10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 20:05:53.762941 10037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0127 20:05:53.774927 10037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 20:05:53.774994 10037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 20:05:53.774997 10037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=1a22b9432724c1a7c0bfc1f92a18db163006c245 minikube.k8s.io/name=cert-expiration-729000 minikube.k8s.io/updated_at=2023_01_27T20_05_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0127 20:05:53.797009 10037 ops.go:34] apiserver oom_adj: -16
I0127 20:05:53.888278 10037 kubeadm.go:1073] duration metric: took 113.33693ms to wait for elevateKubeSystemPrivileges.
I0127 20:05:53.915378 10037 kubeadm.go:403] StartCluster complete in 18.268414695s
I0127 20:05:53.915399 10037 settings.go:142] acquiring lock: {Name:mk80549a2c3028803e331f0580d721d5d766bd61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:53.915479 10037 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15565-3235/kubeconfig
I0127 20:05:53.916067 10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/kubeconfig: {Name:mk69cf50f5abd22c9a63615b05ca8d5c80e5d91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:53.916308 10037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0127 20:05:53.916328 10037 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0127 20:05:53.916371 10037 addons.go:65] Setting storage-provisioner=true in profile "cert-expiration-729000"
I0127 20:05:53.916371 10037 addons.go:65] Setting default-storageclass=true in profile "cert-expiration-729000"
I0127 20:05:53.916383 10037 addons.go:227] Setting addon storage-provisioner=true in "cert-expiration-729000"
W0127 20:05:53.916385 10037 addons.go:236] addon storage-provisioner should already be in state true
I0127 20:05:53.916385 10037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-729000"
I0127 20:05:53.916417 10037 host.go:66] Checking if "cert-expiration-729000" exists ...
I0127 20:05:53.916458 10037 config.go:180] Loaded profile config "cert-expiration-729000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0127 20:05:53.916654 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:53.916672 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:53.916708 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:53.916719 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:53.925102 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52610
I0127 20:05:53.925581 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52612
I0127 20:05:53.925625 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:53.926020 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:53.926027 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:53.926045 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:53.926259 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:53.926370 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:53.926377 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:53.926669 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:53.926824 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
I0127 20:05:53.926952 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:53.926967 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0127 20:05:53.926971 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:53.927074 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
I0127 20:05:53.934868 10037 addons.go:227] Setting addon default-storageclass=true in "cert-expiration-729000"
W0127 20:05:53.934885 10037 addons.go:236] addon default-storageclass should already be in state true
I0127 20:05:53.934915 10037 host.go:66] Checking if "cert-expiration-729000" exists ...
I0127 20:05:53.935175 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52614
I0127 20:05:53.935478 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:53.935496 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:53.936236 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:53.937460 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:53.937480 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:53.937720 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:53.937844 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
I0127 20:05:53.937948 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0127 20:05:53.938075 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
I0127 20:05:53.939003 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:53.943316 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52616
I0127 20:05:53.980584 10037 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 20:05:52.262532 9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 20:05:52.262557 9771 api_server.go:102] status: https://192.168.64.25:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 20:05:52.762782 9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
I0127 20:05:52.769733 9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0127 20:05:52.769753 9771 api_server.go:102] status: https://192.168.64.25:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0127 20:05:53.263140 9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
I0127 20:05:53.267540 9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0127 20:05:53.267554 9771 api_server.go:102] status: https://192.168.64.25:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0127 20:05:53.764362 9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
I0127 20:05:53.768632 9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 200:
ok
I0127 20:05:53.773903 9771 api_server.go:140] control plane version: v1.17.0
I0127 20:05:53.773919 9771 api_server.go:130] duration metric: took 6.455929345s to wait for apiserver health ...
I0127 20:05:53.773925 9771 cni.go:84] Creating CNI manager for ""
I0127 20:05:53.773936 9771 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0127 20:05:53.773948 9771 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 20:05:53.778226 9771 system_pods.go:59] 4 kube-system pods found
I0127 20:05:53.778243 9771 system_pods.go:61] "coredns-6955765f44-4kg27" [9b2e9e1b-c463-40b7-a832-cd1b27921930] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0127 20:05:53.778250 9771 system_pods.go:61] "coredns-6955765f44-7ffc4" [ccdf5641-e668-4b3b-9b72-ede33ad90867] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0127 20:05:53.778254 9771 system_pods.go:61] "kube-proxy-nv5hs" [4137d463-f671-42bd-b020-b1bfbcef217e] Pending
I0127 20:05:53.778258 9771 system_pods.go:61] "storage-provisioner" [d5b749b0-341b-4e5f-adfb-46c6f48adb45] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0127 20:05:53.778262 9771 system_pods.go:74] duration metric: took 4.309335ms to wait for pod list to return data ...
I0127 20:05:53.778268 9771 node_conditions.go:102] verifying NodePressure condition ...
I0127 20:05:53.780400 9771 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
I0127 20:05:53.780415 9771 node_conditions.go:123] node cpu capacity is 2
I0127 20:05:53.780429 9771 node_conditions.go:105] duration metric: took 2.156977ms to run NodePressure ...
I0127 20:05:53.780442 9771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 20:05:54.007475 9771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 20:05:54.014547 9771 ops.go:34] apiserver oom_adj: -16
I0127 20:05:54.014556 9771 kubeadm.go:637] restartCluster took 40.034616433s
I0127 20:05:54.014561 9771 kubeadm.go:403] StartCluster complete in 40.06472281s
I0127 20:05:54.014574 9771 settings.go:142] acquiring lock: {Name:mk80549a2c3028803e331f0580d721d5d766bd61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:54.014638 9771 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15565-3235/kubeconfig
I0127 20:05:54.015328 9771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/kubeconfig: {Name:mk69cf50f5abd22c9a63615b05ca8d5c80e5d91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 20:05:54.015605 9771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0127 20:05:54.015621 9771 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0127 20:05:54.015691 9771 addons.go:65] Setting storage-provisioner=true in profile "running-upgrade-052000"
I0127 20:05:54.015691 9771 addons.go:65] Setting default-storageclass=true in profile "running-upgrade-052000"
I0127 20:05:54.015710 9771 addons.go:227] Setting addon storage-provisioner=true in "running-upgrade-052000"
I0127 20:05:54.015717 9771 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-052000"
W0127 20:05:54.015719 9771 addons.go:236] addon storage-provisioner should already be in state true
I0127 20:05:54.015774 9771 host.go:66] Checking if "running-upgrade-052000" exists ...
I0127 20:05:54.015788 9771 config.go:180] Loaded profile config "running-upgrade-052000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0127 20:05:54.016106 9771 kapi.go:59] client config for running-upgrade-052000: &rest.Config{Host:"https://192.168.64.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 20:05:54.016178 9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:54.016205 9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:54.016265 9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:54.016292 9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:54.025271 9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52622
I0127 20:05:54.025809 9771 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:54.026088 9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52624
I0127 20:05:54.026293 9771 main.go:141] libmachine: Using API Version 1
I0127 20:05:54.026320 9771 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:54.026550 9771 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:54.026603 9771 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:54.026971 9771 main.go:141] libmachine: Using API Version 1
I0127 20:05:54.026986 9771 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:54.027085 9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:54.027116 9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:54.027302 9771 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:54.028806 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetState
I0127 20:05:54.030254 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0127 20:05:54.030374 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | hyperkit pid from json: 9576
I0127 20:05:54.031448 9771 kapi.go:59] client config for running-upgrade-052000: &rest.Config{Host:"https://192.168.64.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 20:05:54.035700 9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52626
I0127 20:05:54.036101 9771 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:54.036490 9771 main.go:141] libmachine: Using API Version 1
I0127 20:05:54.036508 9771 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:54.036808 9771 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:54.036948 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetState
I0127 20:05:54.037107 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0127 20:05:54.037246 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | hyperkit pid from json: 9576
I0127 20:05:54.038373 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .DriverName
I0127 20:05:54.042905 9771 addons.go:227] Setting addon default-storageclass=true in "running-upgrade-052000"
I0127 20:05:54.060613 9771 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
W0127 20:05:54.060622 9771 addons.go:236] addon default-storageclass should already be in state true
I0127 20:05:54.060663 9771 host.go:66] Checking if "running-upgrade-052000" exists ...
I0127 20:05:54.081731 9771 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 20:05:54.081743 9771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 20:05:54.081781 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHHostname
I0127 20:05:54.081969 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHPort
I0127 20:05:54.082032 9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:54.082062 9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:54.082099 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHKeyPath
I0127 20:05:54.082211 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHUsername
I0127 20:05:54.082730 9771 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/running-upgrade-052000/id_rsa Username:docker}
I0127 20:05:54.090512 9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52629
I0127 20:05:54.090965 9771 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:54.091549 9771 main.go:141] libmachine: Using API Version 1
I0127 20:05:54.091576 9771 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:54.091818 9771 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:54.092235 9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:54.092264 9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:54.099956 9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52631
I0127 20:05:54.100372 9771 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:54.100766 9771 main.go:141] libmachine: Using API Version 1
I0127 20:05:54.100781 9771 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:54.101015 9771 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:54.101137 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetState
I0127 20:05:54.101246 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0127 20:05:54.101340 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | hyperkit pid from json: 9576
I0127 20:05:54.102298 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .DriverName
I0127 20:05:54.102491 9771 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0127 20:05:54.102502 9771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 20:05:54.102512 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHHostname
I0127 20:05:54.102615 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHPort
I0127 20:05:54.102715 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHKeyPath
I0127 20:05:54.102830 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHUsername
I0127 20:05:54.102917 9771 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/running-upgrade-052000/id_rsa Username:docker}
I0127 20:05:54.106491 9771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0127 20:05:54.128516 9771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 20:05:54.165587 9771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 20:05:54.319172 9771 start.go:908] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS's ConfigMap
I0127 20:05:54.434474 9771 main.go:141] libmachine: Making call to close driver server
I0127 20:05:54.434493 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
I0127 20:05:54.434647 9771 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:54.434656 9771 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:54.434667 9771 main.go:141] libmachine: Making call to close driver server
I0127 20:05:54.434675 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
I0127 20:05:54.434818 9771 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:54.434827 9771 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:54.434838 9771 main.go:141] libmachine: Making call to close driver server
I0127 20:05:54.434846 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
I0127 20:05:54.434936 9771 main.go:141] libmachine: Making call to close driver server
I0127 20:05:54.434951 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
I0127 20:05:54.435010 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | Closing plugin on server side
I0127 20:05:54.435089 9771 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:54.435128 9771 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:54.435135 9771 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:54.435159 9771 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:54.435180 9771 main.go:141] libmachine: Making call to close driver server
I0127 20:05:54.435223 9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
I0127 20:05:54.435237 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | Closing plugin on server side
I0127 20:05:54.435370 9771 main.go:141] libmachine: (running-upgrade-052000) DBG | Closing plugin on server side
I0127 20:05:54.435452 9771 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:54.435474 9771 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:53.981052 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:54.000605 10037 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 20:05:54.000616 10037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 20:05:54.000634 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:54.000859 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:54.001041 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:54.001076 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:54.001082 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:54.001319 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:54.001562 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:54.001627 10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
I0127 20:05:54.002238 10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0127 20:05:54.002266 10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0127 20:05:54.005684 10037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0127 20:05:54.010175 10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52619
I0127 20:05:54.010507 10037 main.go:141] libmachine: () Calling .GetVersion
I0127 20:05:54.010870 10037 main.go:141] libmachine: Using API Version 1
I0127 20:05:54.010883 10037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 20:05:54.011081 10037 main.go:141] libmachine: () Calling .GetMachineName
I0127 20:05:54.011171 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
I0127 20:05:54.011268 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0127 20:05:54.011350 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
I0127 20:05:54.012255 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
I0127 20:05:54.012418 10037 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0127 20:05:54.012423 10037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 20:05:54.012431 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
I0127 20:05:54.012509 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
I0127 20:05:54.012611 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
I0127 20:05:54.012702 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
I0127 20:05:54.012775 10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
I0127 20:05:54.066197 10037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 20:05:54.087054 10037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 20:05:54.439646 10037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-729000" context rescaled to 1 replicas
I0127 20:05:54.439665 10037 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0127 20:05:54.473718 9771 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0127 20:05:54.514486 10037 out.go:177] * Verifying Kubernetes components...
I0127 20:05:54.514480 9771 addons.go:488] enableAddons completed in 498.883283ms
I0127 20:05:54.589753 9771 kapi.go:248] "coredns" deployment in "kube-system" namespace and "running-upgrade-052000" context rescaled to 1 replicas
I0127 20:05:54.589782 9771 start.go:221] Will wait 6m0s for node &{Name:minikube IP:192.168.64.25 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0127 20:05:54.610530 9771 out.go:177] * Verifying Kubernetes components...
I0127 20:05:54.668693 9771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 20:05:54.675145 9771 kubeadm.go:515] skip waiting for components based on config.
I0127 20:05:54.675159 9771 node_conditions.go:102] verifying NodePressure condition ...
I0127 20:05:54.686532 9771 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
I0127 20:05:54.686547 9771 node_conditions.go:123] node cpu capacity is 2
I0127 20:05:54.686554 9771 node_conditions.go:105] duration metric: took 11.391146ms to run NodePressure ...
I0127 20:05:54.686561 9771 start.go:226] waiting for startup goroutines ...
I0127 20:05:54.686921 9771 ssh_runner.go:195] Run: rm -f paused
I0127 20:05:54.726717 9771 start.go:538] kubectl: 1.25.4, cluster: 1.17.0 (minor skew: 8)
I0127 20:05:54.747501 9771 out.go:177]
W0127 20:05:54.784939 9771 out.go:239] ! /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.17.0.
I0127 20:05:54.822646 9771 out.go:177] - Want kubectl v1.17.0? Try 'minikube kubectl -- get pods -A'
I0127 20:05:54.880752 9771 out.go:177] * Done! kubectl is now configured to use "running-upgrade-052000" cluster and "" namespace by default
I0127 20:05:54.588794 10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 20:05:54.695327 10037 start.go:908] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS's ConfigMap
I0127 20:05:55.019014 10037 main.go:141] libmachine: Making call to close driver server
I0127 20:05:55.019024 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
I0127 20:05:55.019040 10037 main.go:141] libmachine: Making call to close driver server
I0127 20:05:55.019049 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
I0127 20:05:55.019263 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | Closing plugin on server side
I0127 20:05:55.019289 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | Closing plugin on server side
I0127 20:05:55.019297 10037 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:55.019305 10037 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:55.019310 10037 main.go:141] libmachine: Making call to close driver server
I0127 20:05:55.019318 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
I0127 20:05:55.019306 10037 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:55.019376 10037 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:55.019408 10037 main.go:141] libmachine: Making call to close driver server
I0127 20:05:55.019420 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
I0127 20:05:55.019553 10037 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:55.019560 10037 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:55.019599 10037 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:55.019605 10037 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:55.019614 10037 main.go:141] libmachine: Making call to close driver server
I0127 20:05:55.019620 10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
I0127 20:05:55.019633 10037 main.go:141] libmachine: (cert-expiration-729000) DBG | Closing plugin on server side
I0127 20:05:55.019825 10037 main.go:141] libmachine: Successfully made call to close driver server
I0127 20:05:55.019831 10037 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 20:05:55.020129 10037 api_server.go:51] waiting for apiserver process to appear ...
I0127 20:05:55.041684 10037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0127 20:05:55.041785 10037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 20:05:55.115731 10037 addons.go:488] enableAddons completed in 1.199391115s
I0127 20:05:55.127154 10037 api_server.go:71] duration metric: took 687.485477ms to wait for apiserver process to appear ...
I0127 20:05:55.127168 10037 api_server.go:87] waiting for apiserver healthz status ...
I0127 20:05:55.127186 10037 api_server.go:252] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0127 20:05:55.131531 10037 api_server.go:278] https://192.168.64.23:8443/healthz returned 200:
ok
I0127 20:05:55.132234 10037 api_server.go:140] control plane version: v1.26.1
I0127 20:05:55.132244 10037 api_server.go:130] duration metric: took 5.073117ms to wait for apiserver health ...
I0127 20:05:55.132254 10037 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 20:05:55.137518 10037 system_pods.go:59] 5 kube-system pods found
I0127 20:05:55.137538 10037 system_pods.go:61] "etcd-cert-expiration-729000" [0bec7128-396f-402f-8948-9bf5db76a8fd] Pending
I0127 20:05:55.137543 10037 system_pods.go:61] "kube-apiserver-cert-expiration-729000" [6e5cac21-ac6e-4c04-9878-fd9d325fb961] Pending
I0127 20:05:55.137547 10037 system_pods.go:61] "kube-controller-manager-cert-expiration-729000" [9f8e6296-e586-4599-a39e-3dd88b191593] Pending
I0127 20:05:55.137551 10037 system_pods.go:61] "kube-scheduler-cert-expiration-729000" [87e82be0-8864-4da6-8593-83145af5e215] Pending
I0127 20:05:55.137564 10037 system_pods.go:61] "storage-provisioner" [eea6ffb0-41ec-4ee7-baca-63c758359b69] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
I0127 20:05:55.137568 10037 system_pods.go:74] duration metric: took 5.311376ms to wait for pod list to return data ...
I0127 20:05:55.137575 10037 kubeadm.go:578] duration metric: took 697.910946ms to wait for : map[apiserver:true system_pods:true] ...
I0127 20:05:55.137587 10037 node_conditions.go:102] verifying NodePressure condition ...
I0127 20:05:55.140104 10037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0127 20:05:55.140118 10037 node_conditions.go:123] node cpu capacity is 2
I0127 20:05:55.140126 10037 node_conditions.go:105] duration metric: took 2.536474ms to run NodePressure ...
I0127 20:05:55.140132 10037 start.go:226] waiting for startup goroutines ...
I0127 20:05:55.140457 10037 ssh_runner.go:195] Run: rm -f paused
I0127 20:05:55.179678 10037 start.go:538] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
I0127 20:05:55.200419 10037 out.go:177] * Done! kubectl is now configured to use "cert-expiration-729000" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Sat 2023-01-28 04:02:18 UTC, ends at Sat 2023-01-28 04:05:56 UTC. --
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.306380277Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5fac51758de637db5711bdcade68115305c238d1b6062536e6b473f11ee970f8 pid=2034 runtime=io.containerd.runc.v2
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474321665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474448207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474506449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474753684Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/09f9be266e60386919d6e9174c77ec460072c4907c415e904e1cb3da22c0656b pid=2068 runtime=io.containerd.runc.v2
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547667876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547708778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547716466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547815240Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e100c9dfaa645a97d0a481e5cd4a14603764b569114d3c7e5837834abbee083f pid=2103 runtime=io.containerd.runc.v2
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880236211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880473337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880559102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880843667Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e90861c322335deda0553b14a4f8943dab0937810203b8ee9bc4978e9ddb011c pid=2163 runtime=io.containerd.runc.v2
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.889441115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.889809966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.889872917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.890642983Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ee29e39c94dd8fd9ef9118d80751491f39701fc8750a36ee7daff2642c1ddeb8 pid=2190 runtime=io.containerd.runc.v2
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.071856013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.071916984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.071926404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.072460166Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fe78b76dec737fec3cd6b49fffe5e96c82d8d185f9a058d934c83ca9c08f0802 pid=2238 runtime=io.containerd.runc.v2
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.486889975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.486962981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.486973213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.487298302Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ef2d7d5804f0b250b1d706ad02493a4bd07a8ca2d098b17780b5de98262d4acb pid=2314 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
ef2d7d5804f0b 655493523f607 10 seconds ago Running kube-scheduler 0 fe78b76dec737
ee29e39c94dd8 e9c08e11b07f6 11 seconds ago Running kube-controller-manager 0 09f9be266e603
e90861c322335 deb04688c4a35 11 seconds ago Running kube-apiserver 0 e100c9dfaa645
5fac51758de63 fce326961ae2d 11 seconds ago Running etcd 0 b2a84ed55df97
*
* ==> describe nodes <==
* Name: cert-expiration-729000
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=cert-expiration-729000
kubernetes.io/os=linux
minikube.k8s.io/commit=1a22b9432724c1a7c0bfc1f92a18db163006c245
minikube.k8s.io/name=cert-expiration-729000
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_27T20_05_53_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 28 Jan 2023 04:05:52 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: cert-expiration-729000
AcquireTime: <unset>
RenewTime: Sat, 28 Jan 2023 04:05:53 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 28 Jan 2023 04:05:55 +0000 Sat, 28 Jan 2023 04:05:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 28 Jan 2023 04:05:55 +0000 Sat, 28 Jan 2023 04:05:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 28 Jan 2023 04:05:55 +0000 Sat, 28 Jan 2023 04:05:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 28 Jan 2023 04:05:55 +0000 Sat, 28 Jan 2023 04:05:55 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.23
Hostname: cert-expiration-729000
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 139e4a18d81a4104bb2f65dd3a7d7d81
System UUID: 8ab711ed-0000-0000-8fe6-149d997fca88
Boot ID: 103ebed0-4a1d-40ed-9dd1-e3571ca42c78
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 etcd-cert-expiration-729000                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3s
kube-system                 kube-apiserver-cert-expiration-729000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
kube-system                 kube-controller-manager-cert-expiration-729000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
kube-system                 kube-scheduler-cert-expiration-729000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                650m (32%)   0 (0%)
memory             100Mi (5%)   0 (0%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 3s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3s kubelet Node cert-expiration-729000 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3s kubelet Node cert-expiration-729000 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3s kubelet Node cert-expiration-729000 status is now: NodeHasSufficientPID
Normal NodeReady 1s kubelet Node cert-expiration-729000 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.972526] systemd-fstab-generator[531]: Ignoring "noauto" for root device
[ +0.088260] systemd-fstab-generator[542]: Ignoring "noauto" for root device
[ +5.501415] systemd-fstab-generator[730]: Ignoring "noauto" for root device
[ +1.234658] kauditd_printk_skb: 16 callbacks suppressed
[ +0.214029] systemd-fstab-generator[892]: Ignoring "noauto" for root device
[ +0.202014] systemd-fstab-generator[927]: Ignoring "noauto" for root device
[ +0.090948] systemd-fstab-generator[938]: Ignoring "noauto" for root device
[ +0.099109] systemd-fstab-generator[951]: Ignoring "noauto" for root device
[ +1.312105] systemd-fstab-generator[1100]: Ignoring "noauto" for root device
[ +0.081311] systemd-fstab-generator[1111]: Ignoring "noauto" for root device
[ +0.097154] systemd-fstab-generator[1122]: Ignoring "noauto" for root device
[ +0.088567] systemd-fstab-generator[1133]: Ignoring "noauto" for root device
[Jan28 04:05] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
[ +0.167442] systemd-fstab-generator[1321]: Ignoring "noauto" for root device
[ +0.092339] systemd-fstab-generator[1332]: Ignoring "noauto" for root device
[ +0.105251] systemd-fstab-generator[1345]: Ignoring "noauto" for root device
[ +1.175498] kauditd_printk_skb: 68 callbacks suppressed
[ +0.136199] systemd-fstab-generator[1492]: Ignoring "noauto" for root device
[ +0.088589] systemd-fstab-generator[1503]: Ignoring "noauto" for root device
[ +0.079662] systemd-fstab-generator[1514]: Ignoring "noauto" for root device
[ +0.092603] systemd-fstab-generator[1525]: Ignoring "noauto" for root device
[ +4.271616] systemd-fstab-generator[1776]: Ignoring "noauto" for root device
[ +0.421234] kauditd_printk_skb: 29 callbacks suppressed
[ +13.790743] systemd-fstab-generator[2536]: Ignoring "noauto" for root device
*
* ==> etcd [5fac51758de6] <==
* {"level":"info","ts":"2023-01-28T04:05:45.727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 switched to configuration voters=(3857958311015864865)"}
{"level":"info","ts":"2023-01-28T04:05:45.727Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","added-peer-id":"358a38a4be5dda21","added-peer-peer-urls":["https://192.168.64.23:2380"]}
{"level":"info","ts":"2023-01-28T04:05:45.739Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-28T04:05:45.739Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"358a38a4be5dda21","initial-advertise-peer-urls":["https://192.168.64.23:2380"],"listen-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-28T04:05:45.740Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-28T04:05:45.742Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.23:2380"}
{"level":"info","ts":"2023-01-28T04:05:45.742Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.23:2380"}
{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 is starting a new election at term 1"}
{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became pre-candidate at term 1"}
{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgPreVoteResp from 358a38a4be5dda21 at term 1"}
{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became candidate at term 2"}
{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgVoteResp from 358a38a4be5dda21 at term 2"}
{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became leader at term 2"}
{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 358a38a4be5dda21 elected leader 358a38a4be5dda21 at term 2"}
{"level":"info","ts":"2023-01-28T04:05:46.115Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"358a38a4be5dda21","local-member-attributes":"{Name:cert-expiration-729000 ClientURLs:[https://192.168.64.23:2379]}","request-path":"/0/members/358a38a4be5dda21/attributes","cluster-id":"bf21a475ce91bca1","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-28T04:05:46.115Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-28T04:05:46.116Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.23:2379"}
{"level":"info","ts":"2023-01-28T04:05:46.116Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T04:05:46.116Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-28T04:05:46.117Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-28T04:05:46.119Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-28T04:05:46.123Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-28T04:05:46.124Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T04:05:46.124Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T04:05:46.124Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
*
* ==> kernel <==
* 04:05:57 up 3 min, 0 users, load average: 0.42, 0.12, 0.03
Linux cert-expiration-729000 5.10.57 #1 SMP Sat Jan 28 02:15:18 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [e90861c32233] <==
* I0128 04:05:48.595703 1 controller.go:615] quota admission added evaluator for: namespaces
I0128 04:05:48.614923 1 cache.go:39] Caches are synced for autoregister controller
I0128 04:05:48.615279 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0128 04:05:48.616243 1 shared_informer.go:280] Caches are synced for configmaps
I0128 04:05:48.616973 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0128 04:05:48.617049 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0128 04:05:48.617136 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0128 04:05:48.619213 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0128 04:05:48.619306 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0128 04:05:48.644426 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0128 04:05:48.656786 1 shared_informer.go:280] Caches are synced for node_authorizer
I0128 04:05:49.319314 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0128 04:05:49.520112 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0128 04:05:49.527916 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0128 04:05:49.527949 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0128 04:05:49.839985 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0128 04:05:49.861833 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0128 04:05:49.910412 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0128 04:05:49.916919 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.64.23]
I0128 04:05:49.917742 1 controller.go:615] quota admission added evaluator for: endpoints
I0128 04:05:49.920179 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0128 04:05:50.574873 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0128 04:05:53.660454 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0128 04:05:53.667353 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0128 04:05:53.675800 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
*
* ==> kube-controller-manager [ee29e39c94dd] <==
* I0128 04:05:50.582791 1 cronjob_controllerv2.go:137] "Starting cronjob controller v2"
I0128 04:05:50.583297 1 shared_informer.go:273] Waiting for caches to sync for cronjob
I0128 04:05:50.588319 1 controllermanager.go:622] Started "csrapproving"
I0128 04:05:50.588575 1 certificate_controller.go:112] Starting certificate controller "csrapproving"
I0128 04:05:50.588658 1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving
I0128 04:05:50.590188 1 controllermanager.go:622] Started "csrcleaner"
I0128 04:05:50.590199 1 cleaner.go:82] Starting CSR cleaner controller
I0128 04:05:50.596558 1 node_lifecycle_controller.go:492] Controller will reconcile labels.
I0128 04:05:50.596604 1 controllermanager.go:622] Started "nodelifecycle"
I0128 04:05:50.596833 1 node_lifecycle_controller.go:527] Sending events to api server.
I0128 04:05:50.596866 1 node_lifecycle_controller.go:538] Starting node controller
I0128 04:05:50.596872 1 shared_informer.go:273] Waiting for caches to sync for taint
I0128 04:05:50.602075 1 controllermanager.go:622] Started "podgc"
I0128 04:05:50.602279 1 gc_controller.go:102] Starting GC controller
I0128 04:05:50.602307 1 shared_informer.go:273] Waiting for caches to sync for GC
I0128 04:05:50.607603 1 controllermanager.go:622] Started "serviceaccount"
I0128 04:05:50.607801 1 serviceaccounts_controller.go:111] Starting service account controller
I0128 04:05:50.607830 1 shared_informer.go:273] Waiting for caches to sync for service account
I0128 04:05:50.613104 1 controllermanager.go:622] Started "replicaset"
I0128 04:05:50.613413 1 replica_set.go:201] Starting replicaset controller
I0128 04:05:50.613421 1 shared_informer.go:273] Waiting for caches to sync for ReplicaSet
I0128 04:05:50.619157 1 controllermanager.go:622] Started "persistentvolume-binder"
I0128 04:05:50.619699 1 pv_controller_base.go:318] Starting persistent volume controller
I0128 04:05:50.619726 1 shared_informer.go:273] Waiting for caches to sync for persistent volume
I0128 04:05:50.670200 1 shared_informer.go:280] Caches are synced for tokens
*
* ==> kube-scheduler [ef2d7d5804f0] <==
* W0128 04:05:48.604286 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0128 04:05:48.604294 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0128 04:05:48.605697 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0128 04:05:48.605753 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0128 04:05:48.605933 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0128 04:05:48.605984 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0128 04:05:48.606062 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0128 04:05:48.606108 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0128 04:05:48.606151 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0128 04:05:48.606196 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0128 04:05:48.606267 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0128 04:05:48.606315 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0128 04:05:49.508724 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0128 04:05:49.508800 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0128 04:05:49.518012 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0128 04:05:49.518085 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0128 04:05:49.580469 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0128 04:05:49.580567 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0128 04:05:49.613964 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0128 04:05:49.614001 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0128 04:05:49.651546 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0128 04:05:49.651616 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0128 04:05:49.700131 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0128 04:05:49.700167 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0128 04:05:49.995064 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Sat 2023-01-28 04:02:18 UTC, ends at Sat 2023-01-28 04:05:57 UTC. --
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064103 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-k8s-certs\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064191 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf4ae35f72ae3f906ee969eaec31c6b5-kubeconfig\") pod \"kube-scheduler-cert-expiration-729000\" (UID: \"bf4ae35f72ae3f906ee969eaec31c6b5\") " pod="kube-system/kube-scheduler-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064213 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/777e73f7ad1ac012089ecaaa2fcbabab-etcd-data\") pod \"etcd-cert-expiration-729000\" (UID: \"777e73f7ad1ac012089ecaaa2fcbabab\") " pod="kube-system/etcd-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064232 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f7b59a0f3e7bb502743e8d8e38b7d9a-ca-certs\") pod \"kube-apiserver-cert-expiration-729000\" (UID: \"3f7b59a0f3e7bb502743e8d8e38b7d9a\") " pod="kube-system/kube-apiserver-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064252 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-ca-certs\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064313 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-flexvolume-dir\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064400 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-kubeconfig\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064424 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-usr-share-ca-certificates\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064445 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/777e73f7ad1ac012089ecaaa2fcbabab-etcd-certs\") pod \"etcd-cert-expiration-729000\" (UID: \"777e73f7ad1ac012089ecaaa2fcbabab\") " pod="kube-system/etcd-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064463 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f7b59a0f3e7bb502743e8d8e38b7d9a-k8s-certs\") pod \"kube-apiserver-cert-expiration-729000\" (UID: \"3f7b59a0f3e7bb502743e8d8e38b7d9a\") " pod="kube-system/kube-apiserver-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064528 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f7b59a0f3e7bb502743e8d8e38b7d9a-usr-share-ca-certificates\") pod \"kube-apiserver-cert-expiration-729000\" (UID: \"3f7b59a0f3e7bb502743e8d8e38b7d9a\") " pod="kube-system/kube-apiserver-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: E0128 04:05:54.149999 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cert-expiration-729000\" already exists" pod="kube-system/kube-scheduler-cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.551571 2556 kubelet_node_status.go:108] "Node was previously registered" node="cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.551670 2556 kubelet_node_status.go:73] "Successfully registered node" node="cert-expiration-729000"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.747578 2556 apiserver.go:52] "Watching apiserver"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.963777 2556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.979374 2556 reconciler.go:41] "Reconciler: start to sync state"
Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: I0128 04:05:55.283585 2556 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: E0128 04:05:55.350447 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-expiration-729000\" already exists" pod="kube-system/kube-apiserver-cert-expiration-729000"
Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: E0128 04:05:55.549473 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cert-expiration-729000\" already exists" pod="kube-system/kube-scheduler-cert-expiration-729000"
Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: E0128 04:05:55.747387 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"etcd-cert-expiration-729000\" already exists" pod="kube-system/etcd-cert-expiration-729000"
Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: I0128 04:05:55.946415 2556 request.go:690] Waited for 1.08926386s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
Jan 28 04:05:56 cert-expiration-729000 kubelet[2556]: E0128 04:05:56.002997 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-cert-expiration-729000\" already exists" pod="kube-system/kube-controller-manager-cert-expiration-729000"
Jan 28 04:05:56 cert-expiration-729000 kubelet[2556]: I0128 04:05:56.547651 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-cert-expiration-729000" podStartSLOduration=3.5476181650000003 pod.CreationTimestamp="2023-01-28 04:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 04:05:56.210086978 +0000 UTC m=+2.565195159" watchObservedRunningTime="2023-01-28 04:05:56.547618165 +0000 UTC m=+2.902726329"
Jan 28 04:05:56 cert-expiration-729000 kubelet[2556]: I0128 04:05:56.996747 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-cert-expiration-729000" podStartSLOduration=3.996717603 pod.CreationTimestamp="2023-01-28 04:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 04:05:56.548333387 +0000 UTC m=+2.903441558" watchObservedRunningTime="2023-01-28 04:05:56.996717603 +0000 UTC m=+3.351825783"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p cert-expiration-729000 -n cert-expiration-729000
helpers_test.go:261: (dbg) Run: kubectl --context cert-expiration-729000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-controller-manager-cert-expiration-729000 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestCertExpiration]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context cert-expiration-729000 describe pod kube-controller-manager-cert-expiration-729000 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context cert-expiration-729000 describe pod kube-controller-manager-cert-expiration-729000 storage-provisioner: exit status 1 (47.282937ms)
** stderr **
Error from server (NotFound): pods "kube-controller-manager-cert-expiration-729000" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:279: kubectl --context cert-expiration-729000 describe pod kube-controller-manager-cert-expiration-729000 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "cert-expiration-729000" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-darwin-amd64 delete -p cert-expiration-729000
=== CONT TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-729000: (5.274928332s)
--- FAIL: TestCertExpiration (232.10s)