=== RUN TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 pause -p newest-cni-006868 --alsologtostderr -v=1
E0805 18:31:48.750436 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:48.755699 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:48.766043 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:48.786349 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:48.827265 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-006868 --alsologtostderr -v=1: (1.140032285s)
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006868 -n newest-cni-006868
E0805 18:31:48.908007 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:49.068440 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:49.389094 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:50.030075 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:51.311118 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:53.871534 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:31:56.049926 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:56.055293 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:56.065619 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:56.086358 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:56.127489 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:56.207861 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:56.368827 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:56.689954 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:57.330975 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:58.612002 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:31:58.991753 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:32:01.172963 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006868 -n newest-cni-006868: exit status 2 (15.720449942s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-006868 -n newest-cni-006868
E0805 18:32:06.293597 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:32:09.232071 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/enable-default-cni-837376/client.crt: no such file or directory
E0805 18:32:13.523461 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/calico-837376/client.crt: no such file or directory
E0805 18:32:16.533750 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/false-837376/client.crt: no such file or directory
E0805 18:32:20.159121 12581 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/custom-flannel-837376/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-006868 -n newest-cni-006868: exit status 2 (15.840138789s)
-- stdout --
Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 unpause -p newest-cni-006868 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006868 -n newest-cni-006868
start_stop_delete_test.go:311: (dbg) Run: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-006868 -n newest-cni-006868
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006868 -n newest-cni-006868
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p newest-cni-006868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-006868 logs -n 25: (2.197133875s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| ssh | -p kubenet-837376 sudo | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p kubenet-837376 sudo | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p kubenet-837376 sudo find | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p kubenet-837376 sudo crio | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| | config | | | | | |
| delete | -p kubenet-837376 | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| start | -p newest-cni-006868 --memory=2200 --alsologtostderr | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:30 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --kubernetes-version=v1.31.0-rc.0 | | | | | |
| addons | enable metrics-server -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.31.0-rc.0 | | | | | |
| addons | enable metrics-server -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:31 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable metrics-server -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-006868 --memory=2200 --alsologtostderr | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --kubernetes-version=v1.31.0-rc.0 | | | | | |
| addons | enable metrics-server -p default-k8s-diff-port-466451 | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | default-k8s-diff-port-466451 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-466451 | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | |
| | default-k8s-diff-port-466451 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.30.3 | | | | | |
| image | newest-cni-006868 image list | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:32 UTC | 05 Aug 24 18:32 UTC |
| | --alsologtostderr -v=1 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/05 18:31:28
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0805 18:31:28.038157 69364 out.go:291] Setting OutFile to fd 1 ...
I0805 18:31:28.038253 69364 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 18:31:28.038260 69364 out.go:304] Setting ErrFile to fd 2...
I0805 18:31:28.038264 69364 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 18:31:28.038419 69364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19374-5415/.minikube/bin
I0805 18:31:28.038925 69364 out.go:298] Setting JSON to false
I0805 18:31:28.039800 69364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4439,"bootTime":1722878249,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0805 18:31:28.039856 69364 start.go:139] virtualization: kvm guest
I0805 18:31:28.042022 69364 out.go:177] * [default-k8s-diff-port-466451] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0805 18:31:28.043520 69364 out.go:177] - MINIKUBE_LOCATION=19374
I0805 18:31:28.043534 69364 notify.go:220] Checking for updates...
I0805 18:31:28.046016 69364 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0805 18:31:28.047213 69364 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:31:28.048409 69364 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19374-5415/.minikube
I0805 18:31:28.049787 69364 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0805 18:31:28.051156 69364 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0805 18:31:28.052751 69364 config.go:182] Loaded profile config "default-k8s-diff-port-466451": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 18:31:28.053184 69364 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:28.053266 69364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:28.068452 69364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39137
I0805 18:31:28.068858 69364 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:28.069513 69364 main.go:141] libmachine: Using API Version 1
I0805 18:31:28.069543 69364 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:28.069905 69364 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:28.070126 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:31:28.070409 69364 driver.go:392] Setting default libvirt URI to qemu:///system
I0805 18:31:28.070823 69364 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:28.070866 69364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:28.085450 69364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
I0805 18:31:28.085866 69364 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:28.086295 69364 main.go:141] libmachine: Using API Version 1
I0805 18:31:28.086316 69364 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:28.086606 69364 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:28.086798 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:31:28.122706 69364 out.go:177] * Using the kvm2 driver based on existing profile
I0805 18:31:28.124009 69364 start.go:297] selected driver: kvm2
I0805 18:31:28.124026 69364 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-466451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-466451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.196 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:31:28.124158 69364 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0805 18:31:28.125122 69364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 18:31:28.125213 69364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19374-5415/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0805 18:31:28.141195 69364 install.go:137] /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
I0805 18:31:28.141617 69364 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 18:31:28.141683 69364 cni.go:84] Creating CNI manager for ""
I0805 18:31:28.141705 69364 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 18:31:28.141780 69364 start.go:340] cluster config:
{Name:default-k8s-diff-port-466451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-466451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.196 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:31:28.141909 69364 iso.go:125] acquiring lock: {Name:mkad4f004e90cc668f8018dec3bb331fe9a9476c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 18:31:28.143783 69364 out.go:177] * Starting "default-k8s-diff-port-466451" primary control-plane node in "default-k8s-diff-port-466451" cluster
I0805 18:31:25.128627 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:25.129183 68956 main.go:141] libmachine: (newest-cni-006868) DBG | unable to find current IP address of domain newest-cni-006868 in network mk-newest-cni-006868
I0805 18:31:25.129212 68956 main.go:141] libmachine: (newest-cni-006868) DBG | I0805 18:31:25.129146 69008 retry.go:31] will retry after 3.693337981s: waiting for machine to come up
I0805 18:31:28.826260 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.826918 68956 main.go:141] libmachine: (newest-cni-006868) Found IP for machine: 192.168.39.154
I0805 18:31:28.826936 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has current primary IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.826942 68956 main.go:141] libmachine: (newest-cni-006868) Reserving static IP address...
I0805 18:31:28.827355 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "newest-cni-006868", mac: "52:54:00:1a:40:80", ip: "192.168.39.154"} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.827389 68956 main.go:141] libmachine: (newest-cni-006868) DBG | skip adding static IP to network mk-newest-cni-006868 - found existing host DHCP lease matching {name: "newest-cni-006868", mac: "52:54:00:1a:40:80", ip: "192.168.39.154"}
I0805 18:31:28.827402 68956 main.go:141] libmachine: (newest-cni-006868) Reserved static IP address: 192.168.39.154
I0805 18:31:28.827415 68956 main.go:141] libmachine: (newest-cni-006868) Waiting for SSH to be available...
I0805 18:31:28.827427 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Getting to WaitForSSH function...
I0805 18:31:28.829584 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.829917 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.829938 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.830019 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Using SSH client type: external
I0805 18:31:28.830060 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa (-rw-------)
I0805 18:31:28.830099 68956 main.go:141] libmachine: (newest-cni-006868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa -p 22] /usr/bin/ssh <nil>}
I0805 18:31:28.830126 68956 main.go:141] libmachine: (newest-cni-006868) DBG | About to run SSH command:
I0805 18:31:28.830138 68956 main.go:141] libmachine: (newest-cni-006868) DBG | exit 0
I0805 18:31:28.951524 68956 main.go:141] libmachine: (newest-cni-006868) DBG | SSH cmd err, output: <nil>:
I0805 18:31:28.951904 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetConfigRaw
I0805 18:31:28.952565 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:28.955122 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.955524 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.955547 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.955788 68956 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/config.json ...
I0805 18:31:28.955970 68956 machine.go:94] provisionDockerMachine start ...
I0805 18:31:28.955986 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:28.956195 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:28.958349 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.958680 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.958708 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.958835 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:28.959011 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:28.959173 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:28.959305 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:28.959469 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:28.959717 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:28.959735 68956 main.go:141] libmachine: About to run SSH command:
hostname
I0805 18:31:29.064086 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0805 18:31:29.064117 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetMachineName
I0805 18:31:29.064391 68956 buildroot.go:166] provisioning hostname "newest-cni-006868"
I0805 18:31:29.064418 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetMachineName
I0805 18:31:29.064622 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.067577 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.067960 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.067985 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.068122 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.068299 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.068484 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.068614 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.068785 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.068954 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.068966 68956 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-006868 && echo "newest-cni-006868" | sudo tee /etc/hostname
I0805 18:31:29.188912 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-006868
I0805 18:31:29.188943 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.191934 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.192363 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.192396 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.192612 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.192793 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.192972 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.193066 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.193233 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.193447 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.193474 68956 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-006868' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-006868/g' /etc/hosts;
else
echo '127.0.1.1 newest-cni-006868' | sudo tee -a /etc/hosts;
fi
fi
I0805 18:31:29.308113 68956 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 18:31:29.308142 68956 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19374-5415/.minikube CaCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19374-5415/.minikube}
I0805 18:31:29.308177 68956 buildroot.go:174] setting up certificates
I0805 18:31:29.308189 68956 provision.go:84] configureAuth start
I0805 18:31:29.308198 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetMachineName
I0805 18:31:29.308504 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:29.311116 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.311512 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.311552 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.311671 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.313902 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.314283 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.314310 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.314447 68956 provision.go:143] copyHostCerts
I0805 18:31:29.314509 68956 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem, removing ...
I0805 18:31:29.314518 68956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem
I0805 18:31:29.314573 68956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem (1082 bytes)
I0805 18:31:29.314668 68956 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem, removing ...
I0805 18:31:29.314678 68956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem
I0805 18:31:29.314699 68956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem (1123 bytes)
I0805 18:31:29.314752 68956 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem, removing ...
I0805 18:31:29.314758 68956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem
I0805 18:31:29.314776 68956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem (1679 bytes)
I0805 18:31:29.314818 68956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem org=jenkins.newest-cni-006868 san=[127.0.0.1 192.168.39.154 localhost minikube newest-cni-006868]
I0805 18:31:29.626177 68956 provision.go:177] copyRemoteCerts
I0805 18:31:29.626242 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 18:31:29.626265 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.629168 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.629519 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.629550 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.629752 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.629963 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.630115 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.630220 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:29.709270 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0805 18:31:29.732286 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0805 18:31:29.754900 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0805 18:31:29.777170 68956 provision.go:87] duration metric: took 468.966791ms to configureAuth
I0805 18:31:29.777209 68956 buildroot.go:189] setting minikube options for container-runtime
I0805 18:31:29.777423 68956 config.go:182] Loaded profile config "newest-cni-006868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
I0805 18:31:29.777447 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:29.777699 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.780158 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.780606 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.780634 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.780739 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.781023 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.781187 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.781341 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.781480 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.781632 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.781642 68956 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 18:31:32.096421 69206 start.go:364] duration metric: took 16.264802166s to acquireMachinesLock for "old-k8s-version-336753"
I0805 18:31:32.096529 69206 start.go:96] Skipping create...Using existing machine configuration
I0805 18:31:32.096537 69206 fix.go:54] fixHost starting:
I0805 18:31:32.096934 69206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:32.096975 69206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:32.117552 69206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
I0805 18:31:32.118063 69206 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:32.118563 69206 main.go:141] libmachine: Using API Version 1
I0805 18:31:32.118588 69206 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:32.118901 69206 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:32.119065 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:32.119212 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetState
I0805 18:31:32.121000 69206 fix.go:112] recreateIfNeeded on old-k8s-version-336753: state=Stopped err=<nil>
I0805 18:31:32.121043 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
W0805 18:31:32.121213 69206 fix.go:138] unexpected machine state, will restart: <nil>
I0805 18:31:32.122980 69206 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-336753" ...
I0805 18:31:29.358263 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:31.358296 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:28.144980 69364 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 18:31:28.145020 69364 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0805 18:31:28.145029 69364 cache.go:56] Caching tarball of preloaded images
I0805 18:31:28.145125 69364 preload.go:172] Found /home/jenkins/minikube-integration/19374-5415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0805 18:31:28.145139 69364 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0805 18:31:28.145279 69364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/config.json ...
I0805 18:31:28.145532 69364 start.go:360] acquireMachinesLock for default-k8s-diff-port-466451: {Name:mk1b1146f745487d6dfed2753982366f4453f7d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 18:31:29.884741 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0805 18:31:29.884762 68956 buildroot.go:70] root file system type: tmpfs
I0805 18:31:29.884880 68956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 18:31:29.884903 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.887693 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.888039 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.888066 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.888204 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.888387 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.888681 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.888803 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.889002 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.889212 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.889297 68956 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 18:31:30.005557 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0805 18:31:30.005607 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:30.008585 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:30.009025 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:30.009050 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:30.009263 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:30.009483 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:30.009666 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:30.009822 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:30.010043 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:30.010254 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:30.010284 68956 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 18:31:31.857038 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0805 18:31:31.857069 68956 machine.go:97] duration metric: took 2.901087555s to provisionDockerMachine
I0805 18:31:31.857082 68956 start.go:293] postStartSetup for "newest-cni-006868" (driver="kvm2")
I0805 18:31:31.857095 68956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 18:31:31.857110 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:31.857400 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 18:31:31.857425 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:31.860431 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.860925 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:31.860951 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.861178 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:31.861360 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:31.861512 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:31.861684 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:31.943778 68956 ssh_runner.go:195] Run: cat /etc/os-release
I0805 18:31:31.948056 68956 info.go:137] Remote host: Buildroot 2023.02.9
I0805 18:31:31.948080 68956 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/addons for local assets ...
I0805 18:31:31.948142 68956 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/files for local assets ...
I0805 18:31:31.948260 68956 filesync.go:149] local asset: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem -> 125812.pem in /etc/ssl/certs
I0805 18:31:31.948384 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0805 18:31:31.962677 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:31:31.989252 68956 start.go:296] duration metric: took 132.156962ms for postStartSetup
I0805 18:31:31.989295 68956 fix.go:56] duration metric: took 21.476755377s for fixHost
I0805 18:31:31.989315 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:31.992010 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.992323 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:31.992348 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.992505 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:31.992771 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:31.993148 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:31.993357 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:31.993602 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:31.993791 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:31.993805 68956 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0805 18:31:32.096264 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722882692.070538782
I0805 18:31:32.096287 68956 fix.go:216] guest clock: 1722882692.070538782
I0805 18:31:32.096297 68956 fix.go:229] Guest: 2024-08-05 18:31:32.070538782 +0000 UTC Remote: 2024-08-05 18:31:31.989299358 +0000 UTC m=+27.213460656 (delta=81.239424ms)
I0805 18:31:32.096347 68956 fix.go:200] guest clock delta is within tolerance: 81.239424ms
I0805 18:31:32.096354 68956 start.go:83] releasing machines lock for "newest-cni-006868", held for 21.583845798s
I0805 18:31:32.096391 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.096678 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:32.099510 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.099957 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:32.099985 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.100177 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.100746 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.100959 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.101061 68956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0805 18:31:32.101107 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:32.101196 68956 ssh_runner.go:195] Run: cat /version.json
I0805 18:31:32.101217 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:32.103924 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104147 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104314 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:32.104341 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104505 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:32.104622 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:32.104653 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104673 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:32.104846 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:32.104885 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:32.105020 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:32.105060 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:32.105300 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:32.105435 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:32.205355 68956 ssh_runner.go:195] Run: systemctl --version
I0805 18:31:32.211845 68956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0805 18:31:32.217396 68956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0805 18:31:32.217487 68956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0805 18:31:32.233535 68956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
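As an aside for readers tracing the step above: the find/mv invocation disables matching CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them. A minimal local sketch of the same pattern (a temp directory stands in for `/etc/cni/net.d`, and no sudo is needed):

```shell
cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist"
# Disable-by-rename: match bridge/podman configs not already disabled,
# and append the .mk_disabled suffix so the step is reversible.
find "$cni" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```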
I0805 18:31:32.233563 68956 start.go:495] detecting cgroup driver to use...
I0805 18:31:32.233677 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:32.251991 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0805 18:31:32.262815 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0805 18:31:32.273954 68956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0805 18:31:32.274024 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0805 18:31:32.285204 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:32.296809 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0805 18:31:32.308942 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:32.321376 68956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0805 18:31:32.333130 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0805 18:31:32.343947 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0805 18:31:32.354925 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0805 18:31:32.366413 68956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0805 18:31:32.376515 68956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0805 18:31:32.387123 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:32.505648 68956 ssh_runner.go:195] Run: sudo systemctl restart containerd
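The run of sed edits above rewrites `/etc/containerd/config.toml` in place before the containerd restart; the key one forces the cgroupfs driver by setting `SystemdCgroup = false`, and another pins the sandbox image. A sketch of the two central substitutions against a throwaway stub (assumed minimal config content, not a full containerd config):

```shell
cfg=$(mktemp)
printf '  SystemdCgroup = true\n  sandbox_image = "old"\n' > "$cfg"
# Same substitutions the log applies to /etc/containerd/config.toml:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
```

The `\1` backreference preserves the original indentation, so the edit is safe regardless of TOML nesting depth.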
I0805 18:31:32.530207 68956 start.go:495] detecting cgroup driver to use...
I0805 18:31:32.530279 68956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0805 18:31:32.545812 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:32.563975 68956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0805 18:31:32.582644 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:32.600020 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:32.615404 68956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0805 18:31:32.645943 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:32.661782 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:32.681377 68956 ssh_runner.go:195] Run: which cri-dockerd
I0805 18:31:32.686005 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0805 18:31:32.696284 68956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0805 18:31:32.713438 68956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0805 18:31:32.850020 68956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0805 18:31:32.992265 68956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0805 18:31:32.992412 68956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0805 18:31:33.012368 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:33.144357 68956 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:31:32.124304 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .Start
I0805 18:31:32.124531 69206 main.go:141] libmachine: (old-k8s-version-336753) Ensuring networks are active...
I0805 18:31:32.125343 69206 main.go:141] libmachine: (old-k8s-version-336753) Ensuring network default is active
I0805 18:31:32.125699 69206 main.go:141] libmachine: (old-k8s-version-336753) Ensuring network mk-old-k8s-version-336753 is active
I0805 18:31:32.126072 69206 main.go:141] libmachine: (old-k8s-version-336753) Getting domain xml...
I0805 18:31:32.126819 69206 main.go:141] libmachine: (old-k8s-version-336753) Creating domain...
I0805 18:31:33.414704 69206 main.go:141] libmachine: (old-k8s-version-336753) Waiting to get IP...
I0805 18:31:33.415652 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:33.416293 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:33.416386 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:33.416267 69427 retry.go:31] will retry after 255.011071ms: waiting for machine to come up
I0805 18:31:33.673148 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:33.673835 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:33.673878 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:33.673791 69427 retry.go:31] will retry after 373.631452ms: waiting for machine to come up
I0805 18:31:34.049506 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:34.049997 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:34.050029 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:34.049950 69427 retry.go:31] will retry after 392.215323ms: waiting for machine to come up
I0805 18:31:34.443438 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:34.444018 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:34.444044 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:34.443953 69427 retry.go:31] will retry after 608.331592ms: waiting for machine to come up
I0805 18:31:35.053500 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:35.054028 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:35.054051 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:35.053983 69427 retry.go:31] will retry after 716.029966ms: waiting for machine to come up
I0805 18:31:35.587036 68956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.44264544s)
I0805 18:31:35.587118 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0805 18:31:35.601344 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:31:35.616637 68956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0805 18:31:35.725430 68956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0805 18:31:35.855686 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:35.978285 68956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0805 18:31:35.996500 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:31:36.010473 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:36.138638 68956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0805 18:31:36.214673 68956 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0805 18:31:36.214755 68956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0805 18:31:36.220751 68956 start.go:563] Will wait 60s for crictl version
I0805 18:31:36.220814 68956 ssh_runner.go:195] Run: which crictl
I0805 18:31:36.224676 68956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0805 18:31:36.260941 68956 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0805 18:31:36.261031 68956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:36.285143 68956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:36.307748 68956 out.go:204] * Preparing Kubernetes v1.31.0-rc.0 on Docker 27.1.1 ...
I0805 18:31:36.307789 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:36.310661 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:36.311042 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:36.311070 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:36.311254 68956 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0805 18:31:36.315204 68956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
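The `/etc/hosts` rewrite above is an idempotent update: filter out any existing tab-separated `host.minikube.internal` entry, append a fresh one, then copy the result back. A sketch against a temp copy (bash required for the `$'\t'` quoting):

```shell
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.1\thost.minikube.internal\n' > "$hosts"
# Drop the old entry (if any), append the new one; never duplicates.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  echo $'192.168.39.1\thost.minikube.internal'; } > "$hosts.new"
```

The same pattern reappears later for `control-plane.minikube.internal`.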
I0805 18:31:36.330230 68956 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0805 18:31:33.859336 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:35.864034 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:36.331272 68956 kubeadm.go:883] updating cluster {Name:newest-cni-006868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:newest-cni-006868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0805 18:31:36.331399 68956 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
I0805 18:31:36.331484 68956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:36.349695 68956 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-scheduler:v1.31.0-rc.0
registry.k8s.io/kube-apiserver:v1.31.0-rc.0
registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
registry.k8s.io/kube-proxy:v1.31.0-rc.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0805 18:31:36.349720 68956 docker.go:615] Images already preloaded, skipping extraction
I0805 18:31:36.349795 68956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:36.368958 68956 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.31.0-rc.0
registry.k8s.io/kube-scheduler:v1.31.0-rc.0
registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
registry.k8s.io/kube-proxy:v1.31.0-rc.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0805 18:31:36.368986 68956 cache_images.go:84] Images are preloaded, skipping loading
I0805 18:31:36.368998 68956 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.31.0-rc.0 docker true true} ...
I0805 18:31:36.369130 68956 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-006868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
[Install]
config:
{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-006868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0805 18:31:36.369203 68956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0805 18:31:36.421222 68956 cni.go:84] Creating CNI manager for ""
I0805 18:31:36.421263 68956 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 18:31:36.421280 68956 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0805 18:31:36.421311 68956 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-006868 NodeName:newest-cni-006868 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0805 18:31:36.421495 68956 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.154
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "newest-cni-006868"
kubeletExtraArgs:
node-ip: 192.168.39.154
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
feature-gates: "ServerSideApply=true"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
feature-gates: "ServerSideApply=true"
leader-elect: "false"
scheduler:
extraArgs:
feature-gates: "ServerSideApply=true"
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0-rc.0
networking:
dnsDomain: cluster.local
podSubnet: "10.42.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0805 18:31:36.421570 68956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
I0805 18:31:36.431938 68956 binaries.go:44] Found k8s binaries, skipping transfer
I0805 18:31:36.432001 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0805 18:31:36.441320 68956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
I0805 18:31:36.460733 68956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I0805 18:31:36.480989 68956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
I0805 18:31:36.500828 68956 ssh_runner.go:195] Run: grep 192.168.39.154 control-plane.minikube.internal$ /etc/hosts
I0805 18:31:36.505320 68956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:31:36.517494 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:36.635749 68956 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 18:31:36.656881 68956 certs.go:68] Setting up /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868 for IP: 192.168.39.154
I0805 18:31:36.656902 68956 certs.go:194] generating shared ca certs ...
I0805 18:31:36.656924 68956 certs.go:226] acquiring lock for ca certs: {Name:mkd5950c6b2de2854a748470350a45601540dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:36.657099 68956 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key
I0805 18:31:36.657177 68956 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key
I0805 18:31:36.657192 68956 certs.go:256] generating profile certs ...
I0805 18:31:36.657305 68956 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/client.key
I0805 18:31:36.657390 68956 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/apiserver.key.b83b5c3d
I0805 18:31:36.657459 68956 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/proxy-client.key
I0805 18:31:36.657620 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem (1338 bytes)
W0805 18:31:36.657667 68956 certs.go:480] ignoring /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581_empty.pem, impossibly tiny 0 bytes
I0805 18:31:36.657681 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem (1679 bytes)
I0805 18:31:36.657716 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem (1082 bytes)
I0805 18:31:36.657761 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem (1123 bytes)
I0805 18:31:36.657794 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem (1679 bytes)
I0805 18:31:36.657870 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:31:36.658661 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0805 18:31:36.688339 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0805 18:31:36.717836 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0805 18:31:36.746025 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0805 18:31:36.777296 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0805 18:31:36.806710 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0805 18:31:36.840217 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0805 18:31:36.869941 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0805 18:31:36.895297 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0805 18:31:36.917901 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem --> /usr/share/ca-certificates/12581.pem (1338 bytes)
I0805 18:31:36.945953 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /usr/share/ca-certificates/125812.pem (1708 bytes)
I0805 18:31:36.970496 68956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0805 18:31:36.987008 68956 ssh_runner.go:195] Run: openssl version
I0805 18:31:36.992758 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0805 18:31:37.003532 68956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0805 18:31:37.007947 68956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 5 17:27 /usr/share/ca-certificates/minikubeCA.pem
I0805 18:31:37.008017 68956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0805 18:31:37.013763 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0805 18:31:37.024449 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12581.pem && ln -fs /usr/share/ca-certificates/12581.pem /etc/ssl/certs/12581.pem"
I0805 18:31:37.035265 68956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12581.pem
I0805 18:31:37.040098 68956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 5 17:34 /usr/share/ca-certificates/12581.pem
I0805 18:31:37.040169 68956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12581.pem
I0805 18:31:37.045922 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12581.pem /etc/ssl/certs/51391683.0"
I0805 18:31:37.056956 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125812.pem && ln -fs /usr/share/ca-certificates/125812.pem /etc/ssl/certs/125812.pem"
I0805 18:31:37.067712 68956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125812.pem
I0805 18:31:37.072276 68956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 5 17:34 /usr/share/ca-certificates/125812.pem
I0805 18:31:37.072338 68956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125812.pem
I0805 18:31:37.078029 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125812.pem /etc/ssl/certs/3ec20f2e.0"
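The `openssl x509 -hash` / `ln -fs` pairs above implement the standard CA trust-store convention: a certificate is linked under `<subject-hash>.0` so OpenSSL can locate it by hashed lookup. A self-contained sketch with a throwaway self-signed cert (the CN is arbitrary):

```shell
tmp=$(mktemp -d)
# Throwaway CA cert purely to demonstrate the <hash>.0 link naming:
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/ca.key" \
  -out "$tmp/ca.pem" -days 1 -subj "/CN=demoCA" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$h.0"   # same scheme as /etc/ssl/certs
```

This explains the otherwise opaque filenames in the log such as `b5213941.0` and `3ec20f2e.0`.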
I0805 18:31:37.088772 68956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0805 18:31:37.093194 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0805 18:31:37.098918 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0805 18:31:37.104393 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0805 18:31:37.110913 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0805 18:31:37.116610 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0805 18:31:37.122812 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
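The repeated `-checkend 86400` probes above ask whether each cluster certificate will still be valid 24 hours from now; the command exits 0 if so, nonzero if the cert would expire within the window. A sketch with a short-lived test cert (2-day validity, chosen here for illustration):

```shell
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/k" -out "$tmp/c" \
  -days 2 -subj "/CN=t" 2>/dev/null
# Exit 0: cert is still valid 24h (86400s) from now.
openssl x509 -noout -in "$tmp/c" -checkend 86400
```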
I0805 18:31:37.128599 68956 kubeadm.go:392] StartCluster: {Name:newest-cni-006868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:newest-cni-006868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartH
ostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:31:37.128719 68956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:31:37.146660 68956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0805 18:31:37.157633 68956 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0805 18:31:37.157659 68956 kubeadm.go:593] restartPrimaryControlPlane start ...
I0805 18:31:37.157710 68956 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0805 18:31:37.170098 68956 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0805 18:31:37.170797 68956 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-006868" does not appear in /home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:31:37.171095 68956 kubeconfig.go:62] /home/jenkins/minikube-integration/19374-5415/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-006868" cluster setting kubeconfig missing "newest-cni-006868" context setting]
I0805 18:31:37.171791 68956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19374-5415/kubeconfig: {Name:mk625b9ea6f09360b6a4e9f50277b2927e24bcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:37.173305 68956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0805 18:31:37.183424 68956 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.154
I0805 18:31:37.183465 68956 kubeadm.go:1160] stopping kube-system containers ...
I0805 18:31:37.183523 68956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:31:37.204002 68956 docker.go:483] Stopping containers: [29e5fb15e683 ee3ce586dc31 863f972a9657 5134dfdc0d50 fbf011536a75 034b8846cf12 21fb70992ec3 cac43aae145f c9b4e9b85518 067a823d9b94 e572d9a1938b 76015da0e4b2 ada8531e09d7 49ed690f0f0c 50297a33ca66 f4c413a7965b 37ceea586604]
I0805 18:31:37.204086 68956 ssh_runner.go:195] Run: docker stop 29e5fb15e683 ee3ce586dc31 863f972a9657 5134dfdc0d50 fbf011536a75 034b8846cf12 21fb70992ec3 cac43aae145f c9b4e9b85518 067a823d9b94 e572d9a1938b 76015da0e4b2 ada8531e09d7 49ed690f0f0c 50297a33ca66 f4c413a7965b 37ceea586604
I0805 18:31:37.225342 68956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0805 18:31:37.241886 68956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0805 18:31:37.251158 68956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0805 18:31:37.251181 68956 kubeadm.go:157] found existing configuration files:
I0805 18:31:37.251227 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0805 18:31:37.260004 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0805 18:31:37.260077 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0805 18:31:37.269615 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0805 18:31:37.279028 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0805 18:31:37.279103 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0805 18:31:37.288499 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0805 18:31:37.297538 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0805 18:31:37.297594 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0805 18:31:37.307369 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0805 18:31:37.316530 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0805 18:31:37.316595 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0805 18:31:37.326160 68956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0805 18:31:37.335282 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:37.468295 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:38.183278 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:38.428796 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:38.497143 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:38.585468 68956 api_server.go:52] waiting for apiserver process to appear ...
I0805 18:31:38.585558 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:39.085840 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:39.585942 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:35.771516 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:35.772025 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:35.772054 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:35.771977 69427 retry.go:31] will retry after 929.312732ms: waiting for machine to come up
I0805 18:31:36.703090 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:36.703733 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:36.703767 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:36.703685 69427 retry.go:31] will retry after 926.726893ms: waiting for machine to come up
I0805 18:31:37.632365 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:37.632942 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:37.632964 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:37.632900 69427 retry.go:31] will retry after 1.291343117s: waiting for machine to come up
I0805 18:31:38.926669 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:38.927129 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:38.927149 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:38.927101 69427 retry.go:31] will retry after 1.830445372s: waiting for machine to come up
I0805 18:31:38.358645 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:40.359280 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:42.359800 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:40.086662 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:40.105979 68956 api_server.go:72] duration metric: took 1.520510323s to wait for apiserver process to appear ...
I0805 18:31:40.106008 68956 api_server.go:88] waiting for apiserver healthz status ...
I0805 18:31:40.106050 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:42.459584 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 18:31:42.459614 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 18:31:42.459638 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:42.620386 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:42.620425 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:42.620442 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:42.645980 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:42.646012 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:43.106199 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:43.120222 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:43.120246 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:43.606910 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:43.612862 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:43.612894 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:44.106209 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:44.111907 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
ok
I0805 18:31:44.119294 68956 api_server.go:141] control plane version: v1.31.0-rc.0
I0805 18:31:44.119320 68956 api_server.go:131] duration metric: took 4.01330617s to wait for apiserver health ...
I0805 18:31:44.119328 68956 cni.go:84] Creating CNI manager for ""
I0805 18:31:44.119339 68956 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 18:31:44.121423 68956 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0805 18:31:44.122803 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0805 18:31:44.133137 68956 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0805 18:31:44.150947 68956 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 18:31:44.163475 68956 system_pods.go:59] 9 kube-system pods found
I0805 18:31:44.163513 68956 system_pods.go:61] "coredns-6f6b679f8f-88m8m" [864943b9-5315-452b-a31a-85db981929ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.163521 68956 system_pods.go:61] "coredns-6f6b679f8f-8lr5f" [c562efab-4c2c-415a-908a-1a8dbb1c8070] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.163529 68956 system_pods.go:61] "etcd-newest-cni-006868" [488a02a4-833a-4a75-8d8d-cdc43de28b87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0805 18:31:44.163535 68956 system_pods.go:61] "kube-apiserver-newest-cni-006868" [967e63e5-3b01-4e52-877d-1ae933940f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0805 18:31:44.163541 68956 system_pods.go:61] "kube-controller-manager-newest-cni-006868" [a86570e6-192e-4833-bda2-c00d1d0c1ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0805 18:31:44.163545 68956 system_pods.go:61] "kube-proxy-xqx9t" [7569998c-3a39-42a8-ab1d-e146b5179424] Running
I0805 18:31:44.163550 68956 system_pods.go:61] "kube-scheduler-newest-cni-006868" [c74a0c10-6d7b-4e99-bcbd-a7a603c0dc4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0805 18:31:44.163555 68956 system_pods.go:61] "metrics-server-6867b74b74-nbp4v" [6ed58f0d-6054-473c-971e-c2269a8c059b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0805 18:31:44.163563 68956 system_pods.go:61] "storage-provisioner" [f8983c9e-ebbc-44da-bccc-cee486a01c95] Running
I0805 18:31:44.163569 68956 system_pods.go:74] duration metric: took 12.604099ms to wait for pod list to return data ...
I0805 18:31:44.163578 68956 node_conditions.go:102] verifying NodePressure condition ...
I0805 18:31:44.167998 68956 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0805 18:31:44.168022 68956 node_conditions.go:123] node cpu capacity is 2
I0805 18:31:44.168033 68956 node_conditions.go:105] duration metric: took 4.451013ms to run NodePressure ...
I0805 18:31:44.168050 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:44.424930 68956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0805 18:31:44.436629 68956 ops.go:34] apiserver oom_adj: -16
I0805 18:31:44.436658 68956 kubeadm.go:597] duration metric: took 7.278991861s to restartPrimaryControlPlane
I0805 18:31:44.436669 68956 kubeadm.go:394] duration metric: took 7.308078248s to StartCluster
I0805 18:31:44.436687 68956 settings.go:142] acquiring lock: {Name:mka55bc46b2003e604f2001e767e118228a1c7ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:44.436770 68956 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:31:44.437729 68956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19374-5415/kubeconfig: {Name:mk625b9ea6f09360b6a4e9f50277b2927e24bcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:44.437989 68956 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0805 18:31:44.438046 68956 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0805 18:31:44.438120 68956 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-006868"
I0805 18:31:44.438142 68956 addons.go:69] Setting default-storageclass=true in profile "newest-cni-006868"
I0805 18:31:44.438163 68956 addons.go:69] Setting dashboard=true in profile "newest-cni-006868"
I0805 18:31:44.438185 68956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-006868"
I0805 18:31:44.438195 68956 addons.go:234] Setting addon dashboard=true in "newest-cni-006868"
W0805 18:31:44.438203 68956 addons.go:243] addon dashboard should already be in state true
I0805 18:31:44.438182 68956 addons.go:69] Setting metrics-server=true in profile "newest-cni-006868"
I0805 18:31:44.438277 68956 addons.go:234] Setting addon metrics-server=true in "newest-cni-006868"
W0805 18:31:44.438297 68956 addons.go:243] addon metrics-server should already be in state true
I0805 18:31:44.438229 68956 config.go:182] Loaded profile config "newest-cni-006868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
I0805 18:31:44.438355 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.438234 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.438490 68956 cache.go:107] acquiring lock: {Name:mk08cdb5b35c2969a80271638168f940d6cf8598 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 18:31:44.438155 68956 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-006868"
W0805 18:31:44.438551 68956 addons.go:243] addon storage-provisioner should already be in state true
I0805 18:31:44.438574 68956 cache.go:115] /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0805 18:31:44.438580 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.438589 68956 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 135.701µs
I0805 18:31:44.438606 68956 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0805 18:31:44.438614 68956 cache.go:87] Successfully saved all images to host disk.
I0805 18:31:44.438647 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438702 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.438721 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438742 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.438794 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438819 68956 config.go:182] Loaded profile config "newest-cni-006868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
I0805 18:31:44.438880 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.438925 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438956 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.439218 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.439249 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.440001 68956 out.go:177] * Verifying Kubernetes components...
I0805 18:31:44.441326 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:44.456512 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
I0805 18:31:44.456542 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
I0805 18:31:44.457322 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.457393 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.457867 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.457891 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.458016 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.458044 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.458268 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.458374 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.458447 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.458563 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
I0805 18:31:44.458689 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
I0805 18:31:44.458963 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.458980 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.458987 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.459361 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.459385 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.459401 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.459889 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.459908 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.459965 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.460323 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.460489 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.460555 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
I0805 18:31:44.460743 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.460877 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.460919 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.460934 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.461423 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.461444 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.461747 68956 addons.go:234] Setting addon default-storageclass=true in "newest-cni-006868"
W0805 18:31:44.461768 68956 addons.go:243] addon default-storageclass should already be in state true
I0805 18:31:44.461796 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.461919 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.462115 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.462146 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.462188 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.464332 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.464371 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.477087 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43237
I0805 18:31:44.478815 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
I0805 18:31:44.479170 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.479309 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.479875 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.479896 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.480036 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.480051 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.480425 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.480613 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.480682 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.480947 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
I0805 18:31:44.481239 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.481874 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.482424 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.482440 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.482871 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.483348 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
I0805 18:31:44.483546 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35729
I0805 18:31:44.483950 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.484098 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.484145 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.484211 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.484291 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.484424 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.484447 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.484847 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.484872 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.484932 68956 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:31:44.485020 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.485385 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.485507 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.485717 68956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:44.485743 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.485852 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.485995 68956 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0805 18:31:44.486043 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.486099 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.486125 68956 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0805 18:31:44.486144 68956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0805 18:31:44.486160 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.487359 68956 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0805 18:31:44.488516 68956 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0805 18:31:44.488581 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0805 18:31:44.488595 68956 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0805 18:31:44.488613 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.489340 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.489592 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0805 18:31:44.489609 68956 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0805 18:31:44.489626 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.490276 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.490301 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.490326 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.490346 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.490364 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.490398 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.490603 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.491270 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.491035 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.492146 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.492184 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.492380 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.492672 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.492989 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493323 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493384 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.493423 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493552 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.493803 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.493829 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493859 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.493984 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.494104 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.494143 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.494260 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.494274 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.494374 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.528998 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
I0805 18:31:44.529488 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.530050 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.530069 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.530415 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.530609 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.532093 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.532310 68956 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0805 18:31:44.532326 68956 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0805 18:31:44.532343 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.535738 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.536303 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.536333 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.536550 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.536736 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.536908 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.537026 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.718357 68956 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 18:31:44.735895 68956 api_server.go:52] waiting for apiserver process to appear ...
I0805 18:31:44.735980 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:44.754478 68956 api_server.go:72] duration metric: took 316.452691ms to wait for apiserver process to appear ...
I0805 18:31:44.754507 68956 api_server.go:88] waiting for apiserver healthz status ...
I0805 18:31:44.754526 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:44.764390 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
ok
I0805 18:31:44.766297 68956 api_server.go:141] control plane version: v1.31.0-rc.0
I0805 18:31:44.766325 68956 api_server.go:131] duration metric: took 11.810001ms to wait for apiserver health ...
I0805 18:31:44.766335 68956 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 18:31:44.779739 68956 system_pods.go:59] 9 kube-system pods found
I0805 18:31:44.779771 68956 system_pods.go:61] "coredns-6f6b679f8f-88m8m" [864943b9-5315-452b-a31a-85db981929ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.779778 68956 system_pods.go:61] "coredns-6f6b679f8f-8lr5f" [c562efab-4c2c-415a-908a-1a8dbb1c8070] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.779785 68956 system_pods.go:61] "etcd-newest-cni-006868" [488a02a4-833a-4a75-8d8d-cdc43de28b87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0805 18:31:44.779791 68956 system_pods.go:61] "kube-apiserver-newest-cni-006868" [967e63e5-3b01-4e52-877d-1ae933940f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0805 18:31:44.779805 68956 system_pods.go:61] "kube-controller-manager-newest-cni-006868" [a86570e6-192e-4833-bda2-c00d1d0c1ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0805 18:31:44.779809 68956 system_pods.go:61] "kube-proxy-xqx9t" [7569998c-3a39-42a8-ab1d-e146b5179424] Running
I0805 18:31:44.779814 68956 system_pods.go:61] "kube-scheduler-newest-cni-006868" [c74a0c10-6d7b-4e99-bcbd-a7a603c0dc4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0805 18:31:44.779819 68956 system_pods.go:61] "metrics-server-6867b74b74-nbp4v" [6ed58f0d-6054-473c-971e-c2269a8c059b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0805 18:31:44.779823 68956 system_pods.go:61] "storage-provisioner" [f8983c9e-ebbc-44da-bccc-cee486a01c95] Running
I0805 18:31:44.779830 68956 system_pods.go:74] duration metric: took 13.488547ms to wait for pod list to return data ...
I0805 18:31:44.779839 68956 default_sa.go:34] waiting for default service account to be created ...
I0805 18:31:44.782766 68956 default_sa.go:45] found service account: "default"
I0805 18:31:44.782788 68956 default_sa.go:55] duration metric: took 2.943139ms for default service account to be created ...
I0805 18:31:44.782798 68956 kubeadm.go:582] duration metric: took 344.779681ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0805 18:31:44.782813 68956 node_conditions.go:102] verifying NodePressure condition ...
I0805 18:31:44.785159 68956 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0805 18:31:44.785178 68956 node_conditions.go:123] node cpu capacity is 2
I0805 18:31:44.785186 68956 node_conditions.go:105] duration metric: took 2.369979ms to run NodePressure ...
I0805 18:31:44.785197 68956 start.go:241] waiting for startup goroutines ...
I0805 18:31:40.759021 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:40.759678 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:40.759707 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:40.759641 69427 retry.go:31] will retry after 1.434861666s: waiting for machine to come up
I0805 18:31:42.196378 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:42.196942 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:42.196972 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:42.196902 69427 retry.go:31] will retry after 2.088776544s: waiting for machine to come up
I0805 18:31:44.288249 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:44.288829 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:44.288862 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:44.288782 69427 retry.go:31] will retry after 3.416549781s: waiting for machine to come up
I0805 18:31:44.820922 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0805 18:31:44.868548 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0805 18:31:44.868574 68956 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0805 18:31:44.902495 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0805 18:31:44.902519 68956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0805 18:31:44.915197 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0805 18:31:44.931007 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0805 18:31:44.931032 68956 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0805 18:31:44.966198 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0805 18:31:44.966223 68956 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0805 18:31:44.984059 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0805 18:31:44.984085 68956 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0805 18:31:45.023462 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0805 18:31:45.023490 68956 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0805 18:31:45.073509 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0805 18:31:45.073532 68956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0805 18:31:45.122090 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0805 18:31:45.270164 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0805 18:31:45.270188 68956 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0805 18:31:45.377617 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0805 18:31:45.377644 68956 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0805 18:31:45.404215 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404243 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404363 68956 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-scheduler:v1.31.0-rc.0
registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
registry.k8s.io/kube-apiserver:v1.31.0-rc.0
registry.k8s.io/kube-proxy:v1.31.0-rc.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0805 18:31:45.404386 68956 cache_images.go:84] Images are preloaded, skipping loading
I0805 18:31:45.404398 68956 cache_images.go:262] succeeded pushing to: newest-cni-006868
I0805 18:31:45.404419 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404430 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404538 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.404560 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.404577 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404586 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404711 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.404725 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.404733 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404745 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404713 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.404862 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.404901 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.404906 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.404983 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.405018 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.405029 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.413441 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.413470 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.413761 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.413773 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.413788 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.451012 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0805 18:31:45.451046 68956 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0805 18:31:45.474601 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0805 18:31:45.474629 68956 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0805 18:31:45.495882 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0805 18:31:45.495910 68956 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0805 18:31:45.536928 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0805 18:31:46.659511 68956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.744273283s)
I0805 18:31:46.659572 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.659587 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.659610 68956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.537483297s)
I0805 18:31:46.659670 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.659701 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.659924 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.659942 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:46.659953 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.659961 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.659960 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.659983 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:46.659996 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.660004 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.660282 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:46.660345 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:46.660376 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.660387 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:46.660397 68956 addons.go:475] Verifying addon metrics-server=true in "newest-cni-006868"
I0805 18:31:46.660441 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.660487 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:47.204841 68956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.667851253s)
I0805 18:31:47.204901 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:47.204915 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:47.205360 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:47.205402 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:47.205411 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:47.205420 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:47.205428 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:47.205746 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:47.205818 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:47.205844 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:47.207425 68956 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-006868 addons enable metrics-server
I0805 18:31:47.208785 68956 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I0805 18:31:47.210024 68956 addons.go:510] duration metric: took 2.7719791s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I0805 18:31:47.210060 68956 start.go:246] waiting for cluster config update ...
I0805 18:31:47.210075 68956 start.go:255] writing updated cluster config ...
I0805 18:31:47.210373 68956 ssh_runner.go:195] Run: rm -f paused
I0805 18:31:47.257668 68956 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
I0805 18:31:47.259793 68956 out.go:177] * Done! kubectl is now configured to use "newest-cni-006868" cluster and "default" namespace by default
I0805 18:31:44.858577 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:46.858945 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:47.706302 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:47.706773 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:47.706823 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:47.706747 69427 retry.go:31] will retry after 4.41727256s: waiting for machine to come up
I0805 18:31:49.357761 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:51.358591 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:55.196740 69364 start.go:364] duration metric: took 27.051156361s to acquireMachinesLock for "default-k8s-diff-port-466451"
I0805 18:31:55.196792 69364 start.go:96] Skipping create...Using existing machine configuration
I0805 18:31:55.196800 69364 fix.go:54] fixHost starting:
I0805 18:31:55.197234 69364 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:55.197282 69364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:55.217579 69364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
I0805 18:31:55.218027 69364 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:55.218575 69364 main.go:141] libmachine: Using API Version 1
I0805 18:31:55.218603 69364 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:55.218937 69364 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:55.219147 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:31:55.219352 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetState
I0805 18:31:55.221258 69364 fix.go:112] recreateIfNeeded on default-k8s-diff-port-466451: state=Stopped err=<nil>
I0805 18:31:55.221301 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
W0805 18:31:55.221485 69364 fix.go:138] unexpected machine state, will restart: <nil>
I0805 18:31:55.223722 69364 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-466451" ...
I0805 18:31:52.125529 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.126018 69206 main.go:141] libmachine: (old-k8s-version-336753) Found IP for machine: 192.168.61.245
I0805 18:31:52.126038 69206 main.go:141] libmachine: (old-k8s-version-336753) Reserving static IP address...
I0805 18:31:52.126048 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has current primary IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.126471 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "old-k8s-version-336753", mac: "52:54:00:54:bf:8c", ip: "192.168.61.245"} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.126510 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | skip adding static IP to network mk-old-k8s-version-336753 - found existing host DHCP lease matching {name: "old-k8s-version-336753", mac: "52:54:00:54:bf:8c", ip: "192.168.61.245"}
I0805 18:31:52.126523 69206 main.go:141] libmachine: (old-k8s-version-336753) Reserved static IP address: 192.168.61.245
I0805 18:31:52.126539 69206 main.go:141] libmachine: (old-k8s-version-336753) Waiting for SSH to be available...
I0805 18:31:52.126564 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | Getting to WaitForSSH function...
I0805 18:31:52.128944 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.129268 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.129299 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.129514 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | Using SSH client type: external
I0805 18:31:52.129531 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | Using SSH private key: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa (-rw-------)
I0805 18:31:52.129573 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa -p 22] /usr/bin/ssh <nil>}
I0805 18:31:52.129591 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | About to run SSH command:
I0805 18:31:52.129615 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | exit 0
I0805 18:31:52.251558 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | SSH cmd err, output: <nil>:
I0805 18:31:52.252043 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetConfigRaw
I0805 18:31:52.252697 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:52.255356 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.255773 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.255799 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.256078 69206 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/config.json ...
I0805 18:31:52.256257 69206 machine.go:94] provisionDockerMachine start ...
I0805 18:31:52.256275 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:52.256495 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.258621 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.258977 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.259003 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.259117 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.259297 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.259449 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.259605 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.259803 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.259994 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.260010 69206 main.go:141] libmachine: About to run SSH command:
hostname
I0805 18:31:52.356566 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0805 18:31:52.356598 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetMachineName
I0805 18:31:52.356850 69206 buildroot.go:166] provisioning hostname "old-k8s-version-336753"
I0805 18:31:52.356875 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetMachineName
I0805 18:31:52.357068 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.359750 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.360210 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.360252 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.360348 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.360558 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.360744 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.360925 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.361105 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.361260 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.361272 69206 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-336753 && echo "old-k8s-version-336753" | sudo tee /etc/hostname
I0805 18:31:52.475048 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-336753
I0805 18:31:52.475082 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.478157 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.478560 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.478599 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.478792 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.478997 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.479151 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.479301 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.479461 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.479641 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.479664 69206 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-336753' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-336753/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-336753' | sudo tee -a /etc/hosts;
fi
fi
I0805 18:31:52.584682 69206 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 18:31:52.584738 69206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19374-5415/.minikube CaCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19374-5415/.minikube}
I0805 18:31:52.584758 69206 buildroot.go:174] setting up certificates
I0805 18:31:52.584768 69206 provision.go:84] configureAuth start
I0805 18:31:52.584776 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetMachineName
I0805 18:31:52.585110 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:52.587944 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.588310 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.588349 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.588500 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.591036 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.591480 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.591505 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.591728 69206 provision.go:143] copyHostCerts
I0805 18:31:52.591783 69206 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem, removing ...
I0805 18:31:52.591792 69206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem
I0805 18:31:52.591844 69206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem (1082 bytes)
I0805 18:31:52.591938 69206 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem, removing ...
I0805 18:31:52.591945 69206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem
I0805 18:31:52.591966 69206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem (1123 bytes)
I0805 18:31:52.592020 69206 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem, removing ...
I0805 18:31:52.592026 69206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem
I0805 18:31:52.592044 69206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem (1679 bytes)
I0805 18:31:52.592090 69206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-336753 san=[127.0.0.1 192.168.61.245 localhost minikube old-k8s-version-336753]
I0805 18:31:52.767859 69206 provision.go:177] copyRemoteCerts
I0805 18:31:52.767981 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 18:31:52.768017 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.772253 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.772696 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.772738 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.772914 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.773163 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.773349 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.773493 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:52.855319 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0805 18:31:52.878490 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0805 18:31:52.900455 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0805 18:31:52.922367 69206 provision.go:87] duration metric: took 337.58908ms to configureAuth
I0805 18:31:52.922397 69206 buildroot.go:189] setting minikube options for container-runtime
I0805 18:31:52.922584 69206 config.go:182] Loaded profile config "old-k8s-version-336753": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0805 18:31:52.922609 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:52.922897 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.925448 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.925857 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.925886 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.926051 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.926236 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.926383 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.926485 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.926655 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.926841 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.926854 69206 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 18:31:53.025041 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0805 18:31:53.025062 69206 buildroot.go:70] root file system type: tmpfs
I0805 18:31:53.025150 69206 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 18:31:53.025179 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:53.027866 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.028202 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:53.028235 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.028455 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:53.028665 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.028847 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.028949 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:53.029147 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:53.029324 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:53.029386 69206 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 18:31:53.141588 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0805 18:31:53.141627 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:53.144729 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.145117 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:53.145146 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.145463 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:53.145680 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.145865 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.145999 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:53.146127 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:53.146306 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:53.146324 69206 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 18:31:54.967065 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0805 18:31:54.967088 69206 machine.go:97] duration metric: took 2.710819429s to provisionDockerMachine
I0805 18:31:54.967100 69206 start.go:293] postStartSetup for "old-k8s-version-336753" (driver="kvm2")
I0805 18:31:54.967110 69206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 18:31:54.967134 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:54.967464 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 18:31:54.967490 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:54.970377 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:54.970839 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:54.970862 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:54.970998 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:54.971243 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:54.971421 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:54.971572 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:55.050716 69206 ssh_runner.go:195] Run: cat /etc/os-release
I0805 18:31:55.054990 69206 info.go:137] Remote host: Buildroot 2023.02.9
I0805 18:31:55.055023 69206 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/addons for local assets ...
I0805 18:31:55.055105 69206 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/files for local assets ...
I0805 18:31:55.055219 69206 filesync.go:149] local asset: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem -> 125812.pem in /etc/ssl/certs
I0805 18:31:55.055471 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0805 18:31:55.066732 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:31:55.090714 69206 start.go:296] duration metric: took 123.598653ms for postStartSetup
I0805 18:31:55.090762 69206 fix.go:56] duration metric: took 22.994225557s for fixHost
I0805 18:31:55.090781 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:55.093783 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.094193 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.094218 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.094447 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:55.094656 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.094850 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.095008 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:55.095161 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:55.095349 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:55.095362 69206 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0805 18:31:55.196538 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722882715.175783349
I0805 18:31:55.196565 69206 fix.go:216] guest clock: 1722882715.175783349
I0805 18:31:55.196575 69206 fix.go:229] Guest: 2024-08-05 18:31:55.175783349 +0000 UTC Remote: 2024-08-05 18:31:55.090766447 +0000 UTC m=+39.421747672 (delta=85.016902ms)
I0805 18:31:55.196598 69206 fix.go:200] guest clock delta is within tolerance: 85.016902ms
I0805 18:31:55.196603 69206 start.go:83] releasing machines lock for "old-k8s-version-336753", held for 23.100096865s
I0805 18:31:55.196628 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.196922 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:55.200016 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.200424 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.200453 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.200685 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.201212 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.201402 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.201486 69206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0805 18:31:55.201526 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:55.201587 69206 ssh_runner.go:195] Run: cat /version.json
I0805 18:31:55.201611 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:55.204078 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204389 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204460 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.204487 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204689 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:55.204786 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.204823 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204860 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.204982 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:55.205052 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:55.205132 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.205192 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:55.205265 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:55.205379 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:55.304405 69206 ssh_runner.go:195] Run: systemctl --version
I0805 18:31:55.311560 69206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0805 18:31:55.318082 69206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0805 18:31:55.318187 69206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0805 18:31:55.329393 69206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0805 18:31:55.345088 69206 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0805 18:31:55.345122 69206 start.go:495] detecting cgroup driver to use...
I0805 18:31:55.345250 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:55.381351 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0805 18:31:55.392454 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0805 18:31:55.404966 69206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0805 18:31:55.405029 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0805 18:31:55.415739 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:55.426035 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0805 18:31:55.437024 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:55.448071 69206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0805 18:31:55.459828 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0805 18:31:55.470757 69206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0805 18:31:55.483022 69206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0805 18:31:55.495192 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:55.614841 69206 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0805 18:31:55.640006 69206 start.go:495] detecting cgroup driver to use...
I0805 18:31:55.640126 69206 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0805 18:31:55.655242 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:55.669017 69206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0805 18:31:55.686891 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:55.700698 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:53.857159 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:55.858324 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:55.225144 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .Start
I0805 18:31:55.225355 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Ensuring networks are active...
I0805 18:31:55.226131 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Ensuring network default is active
I0805 18:31:55.226477 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Ensuring network mk-default-k8s-diff-port-466451 is active
I0805 18:31:55.226847 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Getting domain xml...
I0805 18:31:55.227665 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Creating domain...
I0805 18:31:56.585890 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Waiting to get IP...
I0805 18:31:56.586971 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:56.587528 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:56.587647 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:56.587517 69763 retry.go:31] will retry after 201.625509ms: waiting for machine to come up
I0805 18:31:56.791230 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:56.792000 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:56.792020 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:56.791923 69763 retry.go:31] will retry after 330.212805ms: waiting for machine to come up
I0805 18:31:57.123497 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:57.124072 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:57.124096 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:57.124025 69763 retry.go:31] will retry after 402.812867ms: waiting for machine to come up
I0805 18:31:57.528659 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:57.529242 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:57.529271 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:57.529210 69763 retry.go:31] will retry after 561.907384ms: waiting for machine to come up
I0805 18:31:55.714682 69206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0805 18:31:55.741014 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:55.755319 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:55.775684 69206 ssh_runner.go:195] Run: which cri-dockerd
I0805 18:31:55.779941 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0805 18:31:55.789342 69206 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0805 18:31:55.808377 69206 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0805 18:31:55.932404 69206 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0805 18:31:56.075906 69206 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0805 18:31:56.076042 69206 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0805 18:31:56.097153 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:56.217350 69206 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:31:58.659437 69206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.442043025s)
I0805 18:31:58.659516 69206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:58.690342 69206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:58.714352 69206 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 27.1.1 ...
I0805 18:31:58.714412 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:58.717650 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:58.718109 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:58.718141 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:58.718381 69206 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0805 18:31:58.722795 69206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:31:58.736421 69206 kubeadm.go:883] updating cluster {Name:old-k8s-version-336753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-336753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet:
MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0805 18:31:58.736543 69206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0805 18:31:58.736587 69206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:58.758990 69206 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0805 18:31:58.759010 69206 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
I0805 18:31:58.759055 69206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0805 18:31:58.769968 69206 ssh_runner.go:195] Run: which lz4
I0805 18:31:58.775304 69206 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0805 18:31:58.780343 69206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0805 18:31:58.780374 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
I0805 18:32:00.198025 69206 docker.go:649] duration metric: took 1.422765501s to copy over tarball
I0805 18:32:00.198118 69206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0805 18:31:58.359449 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:00.359839 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:58.093445 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:58.094003 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:58.094036 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:58.093934 69763 retry.go:31] will retry after 569.068607ms: waiting for machine to come up
I0805 18:31:58.664259 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:58.664996 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:58.665030 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:58.664943 69763 retry.go:31] will retry after 844.153352ms: waiting for machine to come up
I0805 18:31:59.510670 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:59.511274 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:59.511303 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:59.511250 69763 retry.go:31] will retry after 1.040034813s: waiting for machine to come up
I0805 18:32:00.553440 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:00.554135 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:00.554167 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:00.554079 69763 retry.go:31] will retry after 1.210960125s: waiting for machine to come up
I0805 18:32:01.766775 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:01.767529 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:01.767560 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:01.767470 69763 retry.go:31] will retry after 1.822151774s: waiting for machine to come up
I0805 18:32:02.837145 69206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.638997048s)
I0805 18:32:02.837181 69206 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0805 18:32:02.878191 69206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0805 18:32:02.889381 69206 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2972 bytes)
I0805 18:32:02.906636 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:03.024121 69206 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:32:02.860395 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:05.379381 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:03.590935 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:03.591437 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:03.591472 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:03.591414 69763 retry.go:31] will retry after 1.723765385s: waiting for machine to come up
I0805 18:32:05.316828 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:05.317324 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:05.317350 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:05.317277 69763 retry.go:31] will retry after 2.077508001s: waiting for machine to come up
I0805 18:32:07.397710 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:07.398442 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:07.398485 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:07.398403 69763 retry.go:31] will retry after 2.45202207s: waiting for machine to come up
I0805 18:32:05.909404 69206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.885234302s)
I0805 18:32:05.909500 69206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:32:05.931711 69206 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0805 18:32:05.931735 69206 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
I0805 18:32:05.931743 69206 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
I0805 18:32:05.933268 69206 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:32:05.933514 69206 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
I0805 18:32:05.933732 69206 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:05.934045 69206 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:32:05.934122 69206 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0805 18:32:05.934263 69206 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
I0805 18:32:05.934411 69206 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:05.935029 69206 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:05.935058 69206 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:05.935254 69206 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0805 18:32:05.935533 69206 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:05.935620 69206 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
I0805 18:32:05.935772 69206 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:05.935814 69206 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:05.936611 69206 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
I0805 18:32:05.936771 69206 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.074619 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.093909 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
I0805 18:32:06.096435 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:06.098615 69206 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
I0805 18:32:06.098654 69206 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.098687 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.099664 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:06.110280 69206 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
I0805 18:32:06.110326 69206 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
I0805 18:32:06.110370 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
I0805 18:32:06.112470 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:06.119997 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
I0805 18:32:06.162118 69206 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
I0805 18:32:06.162170 69206 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:06.162221 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:06.175405 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
I0805 18:32:06.175525 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
I0805 18:32:06.175522 69206 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
I0805 18:32:06.175606 69206 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:06.175690 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:06.184770 69206 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
I0805 18:32:06.184827 69206 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:06.184879 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:06.198273 69206 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
I0805 18:32:06.198322 69206 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
I0805 18:32:06.198377 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
I0805 18:32:06.218138 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
I0805 18:32:06.218204 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
I0805 18:32:06.224276 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0805 18:32:06.224462 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
I0805 18:32:06.228029 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
I0805 18:32:06.242302 69206 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0805 18:32:06.242353 69206 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0805 18:32:06.242449 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0805 18:32:06.259267 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0805 18:32:06.550434 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:32:06.568355 69206 cache_images.go:92] duration metric: took 636.595171ms to LoadCachedImages
W0805 18:32:06.568493 69206 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
I0805 18:32:06.568511 69206 kubeadm.go:934] updating node { 192.168.61.245 8443 v1.20.0 docker true true} ...
I0805 18:32:06.568643 69206 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-336753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0805 18:32:06.568721 69206 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0805 18:32:06.627890 69206 cni.go:84] Creating CNI manager for ""
I0805 18:32:06.627934 69206 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0805 18:32:06.627946 69206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0805 18:32:06.627968 69206 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-336753 NodeName:old-k8s-version-336753 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0805 18:32:06.628110 69206 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.61.245
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-336753"
kubeletExtraArgs:
node-ip: 192.168.61.245
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0805 18:32:06.628169 69206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0805 18:32:06.638230 69206 binaries.go:44] Found k8s binaries, skipping transfer
I0805 18:32:06.638309 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0805 18:32:06.647838 69206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
I0805 18:32:06.665925 69206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0805 18:32:06.682790 69206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0805 18:32:06.700171 69206 ssh_runner.go:195] Run: grep 192.168.61.245 control-plane.minikube.internal$ /etc/hosts
I0805 18:32:06.703832 69206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.245 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:32:06.716333 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:06.839269 69206 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 18:32:06.858244 69206 certs.go:68] Setting up /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753 for IP: 192.168.61.245
I0805 18:32:06.858266 69206 certs.go:194] generating shared ca certs ...
I0805 18:32:06.858283 69206 certs.go:226] acquiring lock for ca certs: {Name:mkd5950c6b2de2854a748470350a45601540dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:32:06.858443 69206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key
I0805 18:32:06.858531 69206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key
I0805 18:32:06.858547 69206 certs.go:256] generating profile certs ...
I0805 18:32:06.858663 69206 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/client.key
I0805 18:32:06.858754 69206 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/apiserver.key.cc820c21
I0805 18:32:06.858806 69206 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/proxy-client.key
I0805 18:32:06.858961 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem (1338 bytes)
W0805 18:32:06.859002 69206 certs.go:480] ignoring /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581_empty.pem, impossibly tiny 0 bytes
I0805 18:32:06.859017 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem (1679 bytes)
I0805 18:32:06.859055 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem (1082 bytes)
I0805 18:32:06.859093 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem (1123 bytes)
I0805 18:32:06.859139 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem (1679 bytes)
I0805 18:32:06.859200 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:32:06.860050 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0805 18:32:06.915064 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0805 18:32:06.946956 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0805 18:32:06.984254 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0805 18:32:07.018204 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0805 18:32:07.054142 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0805 18:32:07.081443 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0805 18:32:07.108923 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0805 18:32:07.137147 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /usr/share/ca-certificates/125812.pem (1708 bytes)
I0805 18:32:07.167904 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0805 18:32:07.193564 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem --> /usr/share/ca-certificates/12581.pem (1338 bytes)
I0805 18:32:07.218353 69206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0805 18:32:07.235064 69206 ssh_runner.go:195] Run: openssl version
I0805 18:32:07.240645 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125812.pem && ln -fs /usr/share/ca-certificates/125812.pem /etc/ssl/certs/125812.pem"
I0805 18:32:07.251142 69206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125812.pem
I0805 18:32:07.255460 69206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 5 17:34 /usr/share/ca-certificates/125812.pem
I0805 18:32:07.255522 69206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125812.pem
I0805 18:32:07.261517 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125812.pem /etc/ssl/certs/3ec20f2e.0"
I0805 18:32:07.272659 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0805 18:32:07.284014 69206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:07.288617 69206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 5 17:27 /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:07.288674 69206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:07.294475 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0805 18:32:07.305197 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12581.pem && ln -fs /usr/share/ca-certificates/12581.pem /etc/ssl/certs/12581.pem"
I0805 18:32:07.315815 69206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12581.pem
I0805 18:32:07.320271 69206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 5 17:34 /usr/share/ca-certificates/12581.pem
I0805 18:32:07.320348 69206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12581.pem
I0805 18:32:07.325863 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12581.pem /etc/ssl/certs/51391683.0"
I0805 18:32:07.337310 69206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0805 18:32:07.341694 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0805 18:32:07.347758 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0805 18:32:07.353646 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0805 18:32:07.360573 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0805 18:32:07.366277 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0805 18:32:07.371972 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0805 18:32:07.377629 69206 kubeadm.go:392] StartCluster: {Name:old-k8s-version-336753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:32:07.377755 69206 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:32:07.399602 69206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0805 18:32:07.409778 69206 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0805 18:32:07.409797 69206 kubeadm.go:593] restartPrimaryControlPlane start ...
I0805 18:32:07.409868 69206 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0805 18:32:07.419469 69206 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0805 18:32:07.420169 69206 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-336753" does not appear in /home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:32:07.420515 69206 kubeconfig.go:62] /home/jenkins/minikube-integration/19374-5415/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-336753" cluster setting kubeconfig missing "old-k8s-version-336753" context setting]
I0805 18:32:07.421105 69206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19374-5415/kubeconfig: {Name:mk625b9ea6f09360b6a4e9f50277b2927e24bcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:32:07.422415 69206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0805 18:32:07.431823 69206 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.245
I0805 18:32:07.431855 69206 kubeadm.go:1160] stopping kube-system containers ...
I0805 18:32:07.431907 69206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:32:07.451423 69206 docker.go:483] Stopping containers: [e68122d164ec ce669064e09b 766797baaa4f fa41201b5f96 44149676ddea f91944446f59 9ed94d80b93d 690ac9b998c7 237dc0dd0e18 d2b74079b40b 1af3a8bc4cd6 16b126554787 e4b8eb5a542a 6365e48ae40b]
I0805 18:32:07.451503 69206 ssh_runner.go:195] Run: docker stop e68122d164ec ce669064e09b 766797baaa4f fa41201b5f96 44149676ddea f91944446f59 9ed94d80b93d 690ac9b998c7 237dc0dd0e18 d2b74079b40b 1af3a8bc4cd6 16b126554787 e4b8eb5a542a 6365e48ae40b
I0805 18:32:07.471832 69206 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0805 18:32:07.487257 69206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0805 18:32:07.497933 69206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0805 18:32:07.497966 69206 kubeadm.go:157] found existing configuration files:
I0805 18:32:07.498027 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0805 18:32:07.507792 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0805 18:32:07.507863 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0805 18:32:07.518079 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0805 18:32:07.527296 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0805 18:32:07.527349 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0805 18:32:07.537173 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0805 18:32:07.547338 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0805 18:32:07.547405 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0805 18:32:07.557712 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0805 18:32:07.567470 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0805 18:32:07.567539 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0805 18:32:07.577501 69206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0805 18:32:07.587276 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:07.754599 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:08.725201 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:08.995809 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:09.189775 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:09.385525 69206 api_server.go:52] waiting for apiserver process to appear ...
I0805 18:32:09.385628 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:09.886448 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:10.385757 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:07.858899 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:09.859165 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:12.360049 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:09.852915 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:09.853493 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:09.853526 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:09.853440 69763 retry.go:31] will retry after 4.448346046s: waiting for machine to come up
I0805 18:32:10.885726 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:10.913680 69206 api_server.go:72] duration metric: took 1.528156079s to wait for apiserver process to appear ...
I0805 18:32:10.913712 69206 api_server.go:88] waiting for apiserver healthz status ...
I0805 18:32:10.913739 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:10.914167 69206 api_server.go:269] stopped: https://192.168.61.245:8443/healthz: Get "https://192.168.61.245:8443/healthz": dial tcp 192.168.61.245:8443: connect: connection refused
I0805 18:32:11.414026 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:15.461466 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 18:32:15.461494 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 18:32:15.461505 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:15.487427 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 18:32:15.487458 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 18:32:14.859509 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:17.358250 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:14.305410 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.305964 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has current primary IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.305999 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Found IP for machine: 192.168.72.196
I0805 18:32:14.306013 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Reserving static IP address...
I0805 18:32:14.306514 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-466451", mac: "52:54:00:4d:3f:ba", ip: "192.168.72.196"} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.306550 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | skip adding static IP to network mk-default-k8s-diff-port-466451 - found existing host DHCP lease matching {name: "default-k8s-diff-port-466451", mac: "52:54:00:4d:3f:ba", ip: "192.168.72.196"}
I0805 18:32:14.306566 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Reserved static IP address: 192.168.72.196
I0805 18:32:14.306615 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Waiting for SSH to be available...
I0805 18:32:14.306661 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | Getting to WaitForSSH function...
I0805 18:32:14.308543 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.308917 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.308968 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.309203 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | Using SSH client type: external
I0805 18:32:14.309231 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | Using SSH private key: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa (-rw-------)
I0805 18:32:14.309257 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa -p 22] /usr/bin/ssh <nil>}
I0805 18:32:14.309274 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | About to run SSH command:
I0805 18:32:14.309289 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | exit 0
I0805 18:32:14.431741 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | SSH cmd err, output: <nil>:
I0805 18:32:14.432097 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetConfigRaw
I0805 18:32:14.432837 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetIP
I0805 18:32:14.435671 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.436109 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.436156 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.436428 69364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/config.json ...
I0805 18:32:14.436629 69364 machine.go:94] provisionDockerMachine start ...
I0805 18:32:14.436649 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:14.436922 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.439272 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.439651 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.439698 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.439778 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:14.439969 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.440144 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.440296 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:14.440454 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:14.440629 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:14.440640 69364 main.go:141] libmachine: About to run SSH command:
hostname
I0805 18:32:14.543996 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0805 18:32:14.544030 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetMachineName
I0805 18:32:14.544300 69364 buildroot.go:166] provisioning hostname "default-k8s-diff-port-466451"
I0805 18:32:14.544330 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetMachineName
I0805 18:32:14.544535 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.547476 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.547928 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.547963 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.548171 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:14.548403 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.548590 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.548775 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:14.548960 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:14.549183 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:14.549203 69364 main.go:141] libmachine: About to run SSH command:
sudo hostname default-k8s-diff-port-466451 && echo "default-k8s-diff-port-466451" | sudo tee /etc/hostname
I0805 18:32:14.661643 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-466451
I0805 18:32:14.661682 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.664635 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.665021 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.665064 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.665253 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:14.665448 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.665687 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.665904 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:14.666115 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:14.666292 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:14.666310 69364 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sdefault-k8s-diff-port-466451' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-466451/g' /etc/hosts;
else
echo '127.0.1.1 default-k8s-diff-port-466451' | sudo tee -a /etc/hosts;
fi
fi
I0805 18:32:14.777165 69364 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 18:32:14.777208 69364 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19374-5415/.minikube CaCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19374-5415/.minikube}
I0805 18:32:14.777258 69364 buildroot.go:174] setting up certificates
I0805 18:32:14.777274 69364 provision.go:84] configureAuth start
I0805 18:32:14.777292 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetMachineName
I0805 18:32:14.777624 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetIP
I0805 18:32:14.780382 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.780816 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.780846 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.781018 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.783718 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.784083 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.784098 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.784272 69364 provision.go:143] copyHostCerts
I0805 18:32:14.784325 69364 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem, removing ...
I0805 18:32:14.784334 69364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem
I0805 18:32:14.784402 69364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem (1123 bytes)
I0805 18:32:14.784533 69364 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem, removing ...
I0805 18:32:14.784551 69364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem
I0805 18:32:14.784574 69364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem (1679 bytes)
I0805 18:32:14.784624 69364 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem, removing ...
I0805 18:32:14.784631 69364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem
I0805 18:32:14.784648 69364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem (1082 bytes)
I0805 18:32:14.784721 69364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-466451 san=[127.0.0.1 192.168.72.196 default-k8s-diff-port-466451 localhost minikube]
I0805 18:32:15.079259 69364 provision.go:177] copyRemoteCerts
I0805 18:32:15.079326 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 18:32:15.079354 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.082718 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.083129 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.083161 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.083322 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.083551 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.083745 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.083962 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:15.165749 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0805 18:32:15.190499 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0805 18:32:15.214983 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0805 18:32:15.240118 69364 provision.go:87] duration metric: took 462.826686ms to configureAuth
I0805 18:32:15.240156 69364 buildroot.go:189] setting minikube options for container-runtime
I0805 18:32:15.240385 69364 config.go:182] Loaded profile config "default-k8s-diff-port-466451": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 18:32:15.240413 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:15.240694 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.243334 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.243778 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.243804 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.244001 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.244200 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.244372 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.244490 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.244702 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:15.244915 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:15.244933 69364 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 18:32:15.345449 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0805 18:32:15.345481 69364 buildroot.go:70] root file system type: tmpfs
I0805 18:32:15.345608 69364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 18:32:15.345636 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.349419 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.349774 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.349822 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.350095 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.350290 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.350485 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.350616 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.350780 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:15.351013 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:15.351085 69364 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 18:32:15.468651 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0805 18:32:15.468678 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.471891 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.472304 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.472337 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.472593 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.472795 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.472972 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.473133 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.473319 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:15.473533 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:15.473562 69364 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 18:32:17.329517 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
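The SSH command above is a compare-then-swap install: the freshly rendered unit is only moved into place (followed by `daemon-reload`/`restart`) when `diff` reports it differs from, or cannot find, the installed file. A standalone sketch of that pattern, run against temp files instead of `/lib/systemd/system` (`update_unit` is a hypothetical helper, not minikube code):

```shell
# Compare-then-swap: install the new unit file only when it differs from the
# installed one; report whether a daemon restart would be needed.
update_unit() {
  new="$1"; installed="$2"
  if diff -u "$installed" "$new" >/dev/null 2>&1; then
    rm -f "$new"             # identical: nothing to do
    echo "unchanged"
  else
    mv "$new" "$installed"   # differs, or not yet installed: swap it in
    echo "restart-needed"
  fi
}

# Demo against a scratch directory rather than /lib/systemd/system.
tmp=$(mktemp -d)
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$tmp/docker.service.new"
r1=$(update_unit "$tmp/docker.service.new" "$tmp/docker.service")
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$tmp/docker.service.new"
r2=$(update_unit "$tmp/docker.service.new" "$tmp/docker.service")
echo "$r1 then $r2"
```

Note the first run reproduces what the log shows: `diff` fails with "can't stat" because no unit is installed yet, so the move (and in the real command, the enable/restart) proceeds.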
I0805 18:32:17.329554 69364 machine.go:97] duration metric: took 2.892911259s to provisionDockerMachine
I0805 18:32:17.329569 69364 start.go:293] postStartSetup for "default-k8s-diff-port-466451" (driver="kvm2")
I0805 18:32:17.329580 69364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 18:32:17.329601 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.329958 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 18:32:17.329985 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.332926 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.333353 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.333387 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.333569 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.333774 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.333949 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.334088 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:17.414167 69364 ssh_runner.go:195] Run: cat /etc/os-release
I0805 18:32:17.418295 69364 info.go:137] Remote host: Buildroot 2023.02.9
I0805 18:32:17.418325 69364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/addons for local assets ...
I0805 18:32:17.418399 69364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/files for local assets ...
I0805 18:32:17.418516 69364 filesync.go:149] local asset: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem -> 125812.pem in /etc/ssl/certs
I0805 18:32:17.418642 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0805 18:32:17.429206 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:32:17.457903 69364 start.go:296] duration metric: took 128.31976ms for postStartSetup
I0805 18:32:17.457973 69364 fix.go:56] duration metric: took 22.261151421s for fixHost
I0805 18:32:17.457998 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.460758 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.461200 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.461231 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.461338 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.461569 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.461759 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.461907 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.462081 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:17.462298 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:17.462314 69364 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0805 18:32:17.565114 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722882737.539804707
I0805 18:32:17.565139 69364 fix.go:216] guest clock: 1722882737.539804707
I0805 18:32:17.565149 69364 fix.go:229] Guest: 2024-08-05 18:32:17.539804707 +0000 UTC Remote: 2024-08-05 18:32:17.45797871 +0000 UTC m=+49.453468695 (delta=81.825997ms)
I0805 18:32:17.565167 69364 fix.go:200] guest clock delta is within tolerance: 81.825997ms
I0805 18:32:17.565172 69364 start.go:83] releasing machines lock for "default-k8s-diff-port-466451", held for 22.368402757s
I0805 18:32:17.565191 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.565488 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetIP
I0805 18:32:17.568934 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.569306 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.569336 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.569641 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.570224 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.570449 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.570557 69364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0805 18:32:17.570601 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.570710 69364 ssh_runner.go:195] Run: cat /version.json
I0805 18:32:17.570759 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.573572 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.573877 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.573947 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.573973 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.574140 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.574360 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.574364 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.574413 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.574444 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.574559 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.574634 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.574712 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:17.574805 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.574940 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:17.678350 69364 ssh_runner.go:195] Run: systemctl --version
I0805 18:32:17.685777 69364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0805 18:32:17.692712 69364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0805 18:32:17.692793 69364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0805 18:32:17.711795 69364 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0805 18:32:17.711824 69364 start.go:495] detecting cgroup driver to use...
I0805 18:32:17.711954 69364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:32:17.731288 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0805 18:32:17.745925 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0805 18:32:17.756636 69364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0805 18:32:17.756726 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0805 18:32:17.768106 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:32:17.780797 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0805 18:32:17.794593 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:32:17.807124 69364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0805 18:32:17.817661 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0805 18:32:17.828041 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0805 18:32:17.839068 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0805 18:32:17.850068 69364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0805 18:32:17.859839 69364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0805 18:32:17.869726 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:17.997849 69364 ssh_runner.go:195] Run: sudo systemctl restart containerd
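The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place to force the cgroupfs driver before restarting containerd. The key substitution, exercised here against a throwaway copy (the sample TOML content is made up for the demo):

```shell
# Flip SystemdCgroup to false while preserving indentation, using the same
# GNU sed substitution the log runs against /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The `( *)` capture keeps the original leading whitespace, so the edit works regardless of how deeply the key is nested in the TOML tree.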
I0805 18:32:18.024955 69364 start.go:495] detecting cgroup driver to use...
I0805 18:32:18.025032 69364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0805 18:32:15.914556 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:16.024697 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0805 18:32:16.024747 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0805 18:32:16.414227 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:16.430439 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0805 18:32:16.430483 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0805 18:32:16.914810 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:16.923950 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0805 18:32:16.923980 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0805 18:32:17.414633 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:17.422101 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
ok
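The repeated healthz dumps above are a plain poll-until-200 loop: probe, log the 500 body, wait, probe again, until the endpoint returns `ok`. A self-contained sketch of that retry logic, where `probe_ok` is a hypothetical stand-in for `curl -ksf https://192.168.61.245:8443/healthz`:

```shell
# Retry a probe until it succeeds or the attempt budget is exhausted,
# mirroring the api_server healthz loop in the log.
retry_healthz() {
  budget="$1"; shift
  n=0
  while [ "$n" -lt "$budget" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ok"
      return 0
    fi
    n=$((n + 1))
  done
  echo "healthz check failed"
  return 1
}

# Stand-in probe: fails until it has been called three times.
count=$(mktemp)
echo 0 > "$count"
probe_ok() {
  c=$(( $(cat "$count") + 1 ))
  echo "$c" > "$count"
  [ "$c" -ge 3 ]
}
result=$(retry_healthz 10 probe_ok)
```

The real loop additionally sleeps between attempts and enforces a wall-clock deadline rather than a fixed attempt count; those details are elided here.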
I0805 18:32:17.431117 69206 api_server.go:141] control plane version: v1.20.0
I0805 18:32:17.431301 69206 api_server.go:131] duration metric: took 6.517577987s to wait for apiserver health ...
I0805 18:32:17.431480 69206 cni.go:84] Creating CNI manager for ""
I0805 18:32:17.431505 69206 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0805 18:32:17.431519 69206 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 18:32:17.441539 69206 system_pods.go:59] 7 kube-system pods found
I0805 18:32:17.441565 69206 system_pods.go:61] "coredns-74ff55c5b-np6jj" [0d5e9a18-1480-4732-b21a-df2a982c5e4d] Running
I0805 18:32:17.441574 69206 system_pods.go:61] "etcd-old-k8s-version-336753" [5e1193a0-9fbb-4a53-b35e-f0d47c003742] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0805 18:32:17.441583 69206 system_pods.go:61] "kube-apiserver-old-k8s-version-336753" [d4b24340-8f76-4454-b4f4-366afcff1baa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0805 18:32:17.441589 69206 system_pods.go:61] "kube-controller-manager-old-k8s-version-336753" [ea7c5a0b-5bc8-4a3d-8319-76d1372d6140] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0805 18:32:17.441596 69206 system_pods.go:61] "kube-proxy-wsr6r" [d5fab68a-44c2-4740-ae33-5ce3884921e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0805 18:32:17.441602 69206 system_pods.go:61] "kube-scheduler-old-k8s-version-336753" [329f71f6-39db-4cf4-aa1e-aa555f5e787f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0805 18:32:17.441608 69206 system_pods.go:61] "storage-provisioner" [5bb92b05-c903-4c3d-a0fc-903e2bd0b9a5] Running
I0805 18:32:17.441614 69206 system_pods.go:74] duration metric: took 10.087626ms to wait for pod list to return data ...
I0805 18:32:17.441620 69206 node_conditions.go:102] verifying NodePressure condition ...
I0805 18:32:17.445294 69206 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0805 18:32:17.445319 69206 node_conditions.go:123] node cpu capacity is 2
I0805 18:32:17.445330 69206 node_conditions.go:105] duration metric: took 3.705521ms to run NodePressure ...
I0805 18:32:17.445345 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:17.833277 69206 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0805 18:32:17.837109 69206 kubeadm.go:739] kubelet initialised
I0805 18:32:17.837131 69206 kubeadm.go:740] duration metric: took 3.829485ms waiting for restarted kubelet to initialise ...
I0805 18:32:17.837138 69206 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0805 18:32:17.842936 69206 pod_ready.go:78] waiting up to 4m0s for pod "coredns-74ff55c5b-np6jj" in "kube-system" namespace to be "Ready" ...
I0805 18:32:19.849745 69206 pod_ready.go:102] pod "coredns-74ff55c5b-np6jj" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:18.040183 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:32:18.054408 69364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0805 18:32:18.073484 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:32:18.090445 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:32:18.108238 69364 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0805 18:32:18.138951 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:32:18.154636 69364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
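The `mkdir && printf | tee` pipeline above is how crictl gets pointed at the cri-dockerd socket (the `%!s(MISSING)` is Go's fmt marker for a format verb whose operand was consumed before the command was logged; the command itself ran `printf %s`). The same write, aimed at a scratch root instead of `/etc`:

```shell
# Recreate the crictl.yaml write against a scratch directory instead of /etc.
root=$(mktemp -d)
mkdir -p "$root/etc"
printf '%s' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock
' | tee "$root/etc/crictl.yaml"
```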
I0805 18:32:18.173519 69364 ssh_runner.go:195] Run: which cri-dockerd
I0805 18:32:18.177517 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0805 18:32:18.186888 69364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0805 18:32:18.204195 69364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0805 18:32:18.316077 69364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0805 18:32:18.446649 69364 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0805 18:32:18.446844 69364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0805 18:32:18.464297 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:18.584434 69364 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:32:21.030295 69364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.445826481s)
I0805 18:32:21.030370 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0805 18:32:21.045090 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:32:21.061908 69364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0805 18:32:21.207689 69364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0805 18:32:21.338300 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:21.468001 69364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0805 18:32:21.491827 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:32:21.515257 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:21.673908 69364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0805 18:32:21.772773 69364 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0805 18:32:21.772848 69364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0805 18:32:21.779915 69364 start.go:563] Will wait 60s for crictl version
I0805 18:32:21.779976 69364 ssh_runner.go:195] Run: which crictl
I0805 18:32:21.785790 69364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0805 18:32:21.833059 69364 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0805 18:32:21.833125 69364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:32:21.864226 69364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
==> Docker <==
Aug 05 18:31:48 newest-cni-006868 dockerd[843]: time="2024-08-05T18:31:48.138090377Z" level=info msg="ignoring event" container=d4db473cd73ae2e9c3f31f6f5206c4023714df44a12940a215674671e531d303 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 05 18:31:48 newest-cni-006868 dockerd[849]: time="2024-08-05T18:31:48.139029828Z" level=info msg="shim disconnected" id=d4db473cd73ae2e9c3f31f6f5206c4023714df44a12940a215674671e531d303 namespace=moby
Aug 05 18:31:48 newest-cni-006868 dockerd[849]: time="2024-08-05T18:31:48.139094166Z" level=warning msg="cleaning up after shim disconnected" id=d4db473cd73ae2e9c3f31f6f5206c4023714df44a12940a215674671e531d303 namespace=moby
Aug 05 18:31:48 newest-cni-006868 dockerd[849]: time="2024-08-05T18:31:48.139103665Z" level=info msg="cleaning up dead shim" namespace=moby
Aug 05 18:31:48 newest-cni-006868 cri-dockerd[1111]: W0805 18:31:48.202049 1111 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Aug 05 18:32:20 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:20.717074809Z" level=info msg="shim disconnected" id=31b584e307bce12b7f3379ec6ac16f8b6d6c6252c94a3f4120b4b6999613ffca namespace=moby
Aug 05 18:32:20 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:20.718496453Z" level=warning msg="cleaning up after shim disconnected" id=31b584e307bce12b7f3379ec6ac16f8b6d6c6252c94a3f4120b4b6999613ffca namespace=moby
Aug 05 18:32:20 newest-cni-006868 dockerd[843]: time="2024-08-05T18:32:20.718854408Z" level=info msg="ignoring event" container=31b584e307bce12b7f3379ec6ac16f8b6d6c6252c94a3f4120b4b6999613ffca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 05 18:32:20 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:20.719201788Z" level=info msg="cleaning up dead shim" namespace=moby
Aug 05 18:32:20 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:20.801453696Z" level=warning msg="cleanup warnings time=\"2024-08-05T18:32:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
Aug 05 18:32:21 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:21Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-6f6b679f8f-88m8m_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"cac43aae145f023dabced2c535fd55774516126f0da427898d9d60330a663405\""
Aug 05 18:32:21 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:21Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-6f6b679f8f-8lr5f_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"21fb70992ec350e6403c3c8dcc681f9024379a324cd7a54ff232ab2d35e0486b\""
Aug 05 18:32:21 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:21Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.644677998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.644789180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.647674694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.650497424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.765445778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.765641785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.765665729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.765937713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.793762977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.793830242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.793841760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.793940392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
4f78f2304a62e 6e38f40d628db Less than a second ago Running storage-provisioner 2 ae5c15f7ceec5 storage-provisioner
31b584e307bce 6e38f40d628db 39 seconds ago Exited storage-provisioner 1 ae5c15f7ceec5 storage-provisioner
4e96aea33e5f1 41cec1c4af04c 39 seconds ago Running kube-proxy 1 4ed94f18e8a47 kube-proxy-xqx9t
22c23d2af5ee9 2e96e5913fc06 43 seconds ago Running etcd 1 276cab80ccbe5 etcd-newest-cni-006868
610674a9184a1 0fd085a247d6c 43 seconds ago Running kube-scheduler 1 d829aa7f88ced kube-scheduler-newest-cni-006868
240e0b6feefae fd01d5222f3a9 43 seconds ago Running kube-controller-manager 1 200aca1d3391b kube-controller-manager-newest-cni-006868
17fd99f38c2d0 c7883f2335b7c 43 seconds ago Running kube-apiserver 1 9105e1ecb0ffc kube-apiserver-newest-cni-006868
5134dfdc0d50d cbb01a7bd410d About a minute ago Exited coredns 0 21fb70992ec35 coredns-6f6b679f8f-8lr5f
fbf011536a75c cbb01a7bd410d About a minute ago Exited coredns 0 cac43aae145f0 coredns-6f6b679f8f-88m8m
034b8846cf12c 41cec1c4af04c About a minute ago Exited kube-proxy 0 c9b4e9b85518d kube-proxy-xqx9t
067a823d9b94a 2e96e5913fc06 About a minute ago Exited etcd 0 37ceea586604d etcd-newest-cni-006868
e572d9a1938b6 c7883f2335b7c About a minute ago Exited kube-apiserver 0 50297a33ca66c kube-apiserver-newest-cni-006868
76015da0e4b2d fd01d5222f3a9 About a minute ago Exited kube-controller-manager 0 49ed690f0f0cb kube-controller-manager-newest-cni-006868
ada8531e09d7d 0fd085a247d6c About a minute ago Exited kube-scheduler 0 f4c413a7965b4 kube-scheduler-newest-cni-006868
==> coredns [5134dfdc0d50] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [fbf011536a75] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: newest-cni-006868
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=newest-cni-006868
kubernetes.io/os=linux
minikube.k8s.io/commit=7ab1b4d76a5d87b75cd4b70be3ee81f93304b0ab
minikube.k8s.io/name=newest-cni-006868
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_05T18_30_37_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 05 Aug 2024 18:30:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: newest-cni-006868
AcquireTime: <unset>
RenewTime: Mon, 05 Aug 2024 18:32:21 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:30:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:30:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:30:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:31:46 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.154
Hostname: newest-cni-006868
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: f47f902bc8a2488abd68588a77f63f29
System UUID: f47f902b-c8a2-488a-bd68-588a77f63f29
Boot ID: 664702a7-1d42-4499-8048-4c37d4979011
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.31.0-rc.0
Kube-Proxy Version:
PodCIDR: 10.42.0.0/24
PodCIDRs: 10.42.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6f6b679f8f-8lr5f 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 102s
kube-system etcd-newest-cni-006868 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 108s
kube-system kube-apiserver-newest-cni-006868 250m (12%) 0 (0%) 0 (0%) 0 (0%) 107s
kube-system kube-controller-manager-newest-cni-006868 200m (10%) 0 (0%) 0 (0%) 0 (0%) 107s
kube-system kube-proxy-xqx9t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 102s
kube-system kube-scheduler-newest-cni-006868 100m (5%) 0 (0%) 0 (0%) 0 (0%) 107s
kube-system metrics-server-6867b74b74-nbp4v 100m (5%) 0 (0%) 200Mi (9%) 0 (0%) 92s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 100s
kubernetes-dashboard dashboard-metrics-scraper-7c96f5b85b-98qnz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37s
kubernetes-dashboard kubernetes-dashboard-695b96c756-9b4fb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 37s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 370Mi (17%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 39s kube-proxy
Normal Starting 100s kube-proxy
Normal Starting 107s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 107s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 107s kubelet Node newest-cni-006868 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 107s kubelet Node newest-cni-006868 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 107s kubelet Node newest-cni-006868 status is now: NodeHasSufficientPID
Normal NodeReady 106s kubelet Node newest-cni-006868 status is now: NodeReady
Normal RegisteredNode 103s node-controller Node newest-cni-006868 event: Registered Node newest-cni-006868 in Controller
Normal NodeHasSufficientMemory 45s (x8 over 45s) kubelet Node newest-cni-006868 status is now: NodeHasSufficientMemory
Normal Starting 45s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 45s (x8 over 45s) kubelet Node newest-cni-006868 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 45s (x7 over 45s) kubelet Node newest-cni-006868 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 45s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 38s node-controller Node newest-cni-006868 event: Registered Node newest-cni-006868 in Controller
Normal Starting 2s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2s kubelet Node newest-cni-006868 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2s kubelet Node newest-cni-006868 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2s kubelet Node newest-cni-006868 status is now: NodeHasSufficientPID
==> dmesg <==
[ +1.994774] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +2.348964] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.628880] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
[ +0.056698] kauditd_printk_skb: 1 callbacks suppressed
[ +0.060771] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
[ +2.144834] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
[ +0.344064] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
[ +0.139301] systemd-fstab-generator[821]: Ignoring "noauto" option for root device
[ +0.155881] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
[ +2.272763] kauditd_printk_skb: 195 callbacks suppressed
[ +0.317386] systemd-fstab-generator[1064]: Ignoring "noauto" option for root device
[ +0.120781] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
[ +0.121553] systemd-fstab-generator[1088]: Ignoring "noauto" option for root device
[ +0.169429] systemd-fstab-generator[1103]: Ignoring "noauto" option for root device
[ +0.490098] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
[ +1.771374] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
[ +4.769059] kauditd_printk_skb: 244 callbacks suppressed
[ +1.452513] systemd-fstab-generator[2048]: Ignoring "noauto" option for root device
[ +3.412008] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
[ +0.140967] kauditd_printk_skb: 115 callbacks suppressed
[ +0.594224] systemd-fstab-generator[2487]: Ignoring "noauto" option for root device
[Aug 5 18:32] kauditd_printk_skb: 27 callbacks suppressed
[ +0.131237] systemd-fstab-generator[2735]: Ignoring "noauto" option for root device
==> etcd [067a823d9b94] <==
{"level":"info","ts":"2024-08-05T18:30:32.238436Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.243753Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:newest-cni-006868 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-05T18:30:32.243940Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:30:32.246798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:30:32.254414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-05T18:30:32.254482Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-05T18:30:32.254841Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.256032Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.256085Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.261530Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:30:32.266221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-05T18:30:32.275835Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:30:32.277306Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
{"level":"info","ts":"2024-08-05T18:30:47.170034Z","caller":"traceutil/trace.go:171","msg":"trace[41647601] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"114.016854ms","start":"2024-08-05T18:30:47.054738Z","end":"2024-08-05T18:30:47.168754Z","steps":["trace[41647601] 'process raft request' (duration: 113.668914ms)"],"step_count":1}
{"level":"info","ts":"2024-08-05T18:30:48.689648Z","caller":"traceutil/trace.go:171","msg":"trace[1645074859] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"116.176308ms","start":"2024-08-05T18:30:48.573455Z","end":"2024-08-05T18:30:48.689632Z","steps":["trace[1645074859] 'process raft request' (duration: 115.860705ms)"],"step_count":1}
{"level":"info","ts":"2024-08-05T18:30:51.711605Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-08-05T18:30:51.712165Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"newest-cni-006868","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
{"level":"warn","ts":"2024-08-05T18:30:51.723044Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-05T18:30:51.723714Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-05T18:30:51.804512Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-05T18:30:51.804581Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
{"level":"info","ts":"2024-08-05T18:30:51.806942Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"10fb7b0a157fc334","current-leader-member-id":"10fb7b0a157fc334"}
{"level":"info","ts":"2024-08-05T18:30:51.819534Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:30:51.820006Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:30:51.820029Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"newest-cni-006868","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
==> etcd [22c23d2af5ee] <==
{"level":"info","ts":"2024-08-05T18:31:40.074575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T18:31:40.074666Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T18:31:40.074693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T18:31:40.070402Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:31:40.078820Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-08-05T18:31:40.079024Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"10fb7b0a157fc334","initial-advertise-peer-urls":["https://192.168.39.154:2380"],"listen-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-08-05T18:31:40.079045Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-05T18:31:40.079132Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:31:40.079140Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:31:40.701389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 is starting a new election at term 2"}
{"level":"info","ts":"2024-08-05T18:31:40.701441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became pre-candidate at term 2"}
{"level":"info","ts":"2024-08-05T18:31:40.701469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgPreVoteResp from 10fb7b0a157fc334 at term 2"}
{"level":"info","ts":"2024-08-05T18:31:40.701680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became candidate at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.701907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgVoteResp from 10fb7b0a157fc334 at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.702116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became leader at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.702159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 10fb7b0a157fc334 elected leader 10fb7b0a157fc334 at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.709437Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:newest-cni-006868 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-05T18:31:40.709878Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:31:40.711343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:31:40.722822Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:31:40.727478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-05T18:31:40.733729Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-05T18:31:40.733767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-05T18:31:40.743490Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:31:40.770023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
==> kernel <==
18:32:23 up 1 min, 0 users, load average: 0.45, 0.16, 0.05
Linux newest-cni-006868 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [17fd99f38c2d] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0805 18:31:42.668066 1 cache.go:39] Caches are synced for autoregister controller
E0805 18:31:42.688241 1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
I0805 18:31:43.409684 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0805 18:31:43.643016 1 handler_proxy.go:99] no RequestInfo found in the context
E0805 18:31:43.643328 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0805 18:31:43.643507 1 handler_proxy.go:99] no RequestInfo found in the context
E0805 18:31:43.643651 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0805 18:31:43.644742 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0805 18:31:43.644923 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0805 18:31:44.252790 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0805 18:31:44.265190 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0805 18:31:44.309813 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0805 18:31:44.343548 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0805 18:31:44.353696 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0805 18:31:46.266339 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0805 18:31:46.306975 1 controller.go:615] quota admission added evaluator for: endpoints
I0805 18:31:46.740167 1 controller.go:615] quota admission added evaluator for: namespaces
I0805 18:31:46.812115 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0805 18:31:47.152316 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.12.22"}
I0805 18:31:47.176437 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.231.199"}
==> kube-apiserver [e572d9a1938b] <==
W0805 18:31:01.075580 1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.084950 1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.085045 1 logging.go:55] [core] [Channel #15 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.156038 1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.182245 1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.183528 1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.209097 1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.327784 1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.356732 1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.377249 1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.416117 1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.429197 1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.475916 1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.479447 1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.507049 1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.516788 1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.597445 1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.609387 1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.671732 1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.673076 1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.673405 1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.677756 1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.686395 1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.764038 1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.820273 1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-controller-manager [240e0b6feefa] <==
I0805 18:31:46.911175 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="21.662141ms"
E0805 18:31:46.912060 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b\" failed with pods \"dashboard-metrics-scraper-7c96f5b85b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0805 18:31:46.924868 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.356907ms"
E0805 18:31:46.925076 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0805 18:31:46.945102 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.742578ms"
E0805 18:31:46.945171 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0805 18:31:46.945324 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="31.535105ms"
E0805 18:31:46.945339 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b\" failed with pods \"dashboard-metrics-scraper-7c96f5b85b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0805 18:31:47.006119 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="58.418274ms"
I0805 18:31:47.029147 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="78.28612ms"
I0805 18:31:47.067370 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="37.818476ms"
I0805 18:31:47.104788 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="98.626819ms"
I0805 18:31:47.121483 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="53.854498ms"
I0805 18:31:47.121731 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="108.31µs"
I0805 18:31:47.131587 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="26.676796ms"
I0805 18:31:47.131940 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="166.814µs"
I0805 18:31:48.041229 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.285µs"
I0805 18:32:20.675490 1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
E0805 18:32:20.775175 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0805 18:32:20.781815 1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
I0805 18:32:21.471236 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:32:22.350164 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="119.715µs"
I0805 18:32:22.394168 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="184.907µs"
I0805 18:32:23.511822 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="77.824µs"
I0805 18:32:23.519431 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="119.488µs"
==> kube-controller-manager [76015da0e4b2] <==
I0805 18:30:40.945240 1 shared_informer.go:320] Caches are synced for disruption
I0805 18:30:40.951946 1 shared_informer.go:320] Caches are synced for endpoint
I0805 18:30:40.997020 1 shared_informer.go:320] Caches are synced for attach detach
I0805 18:30:40.997404 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0805 18:30:41.062245 1 shared_informer.go:320] Caches are synced for resource quota
I0805 18:30:41.067922 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:30:41.073917 1 shared_informer.go:320] Caches are synced for resource quota
I0805 18:30:41.512097 1 shared_informer.go:320] Caches are synced for garbage collector
I0805 18:30:41.552772 1 shared_informer.go:320] Caches are synced for garbage collector
I0805 18:30:41.552804 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0805 18:30:41.673719 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:30:42.055293 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="332.702089ms"
I0805 18:30:42.074793 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.394848ms"
I0805 18:30:42.110104 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="35.257114ms"
I0805 18:30:42.111229 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="162.595µs"
I0805 18:30:42.723826 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.994049ms"
I0805 18:30:42.739123 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.219438ms"
I0805 18:30:42.742328 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="72.086µs"
I0805 18:30:44.207109 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="91.446µs"
I0805 18:30:44.262155 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="73.11µs"
I0805 18:30:47.272673 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:30:51.106254 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="51.193829ms"
I0805 18:30:51.129411 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="20.338118ms"
I0805 18:30:51.132407 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="195.585µs"
I0805 18:30:51.151035 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="93.479µs"
==> kube-proxy [034b8846cf12] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0805 18:30:43.116342 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0805 18:30:43.146865 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
E0805 18:30:43.146945 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0805 18:30:43.214754 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I0805 18:30:43.214803 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0805 18:30:43.214831 1 server_linux.go:169] "Using iptables Proxier"
I0805 18:30:43.217168 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0805 18:30:43.217516 1 server.go:483] "Version info" version="v1.31.0-rc.0"
I0805 18:30:43.217548 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 18:30:43.220246 1 config.go:197] "Starting service config controller"
I0805 18:30:43.220289 1 shared_informer.go:313] Waiting for caches to sync for service config
I0805 18:30:43.220314 1 config.go:104] "Starting endpoint slice config controller"
I0805 18:30:43.220330 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0805 18:30:43.220793 1 config.go:326] "Starting node config controller"
I0805 18:30:43.220819 1 shared_informer.go:313] Waiting for caches to sync for node config
I0805 18:30:43.322698 1 shared_informer.go:320] Caches are synced for node config
I0805 18:30:43.322740 1 shared_informer.go:320] Caches are synced for service config
I0805 18:30:43.322848 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-proxy [4e96aea33e5f] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0805 18:31:43.684994 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0805 18:31:43.704492 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
E0805 18:31:43.704787 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0805 18:31:43.745220 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I0805 18:31:43.745319 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0805 18:31:43.745387 1 server_linux.go:169] "Using iptables Proxier"
I0805 18:31:43.748618 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0805 18:31:43.749596 1 server.go:483] "Version info" version="v1.31.0-rc.0"
I0805 18:31:43.749645 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 18:31:43.755467 1 config.go:326] "Starting node config controller"
I0805 18:31:43.755582 1 shared_informer.go:313] Waiting for caches to sync for node config
I0805 18:31:43.755962 1 config.go:197] "Starting service config controller"
I0805 18:31:43.756098 1 shared_informer.go:313] Waiting for caches to sync for service config
I0805 18:31:43.756154 1 config.go:104] "Starting endpoint slice config controller"
I0805 18:31:43.756225 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0805 18:31:43.855882 1 shared_informer.go:320] Caches are synced for node config
I0805 18:31:43.857000 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0805 18:31:43.857094 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [610674a9184a] <==
W0805 18:31:40.015321 1 feature_gate.go:354] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
W0805 18:31:40.016173 1 feature_gate.go:354] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
I0805 18:31:40.730522 1 serving.go:386] Generated self-signed cert in-memory
W0805 18:31:42.454748 1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0805 18:31:42.454802 1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0805 18:31:42.454813 1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
W0805 18:31:42.454822 1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0805 18:31:42.609530 1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
I0805 18:31:42.609577 1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 18:31:42.629598 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0805 18:31:42.629824 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0805 18:31:42.632243 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0805 18:31:42.633445 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 18:31:42.734203 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [ada8531e09d7] <==
W0805 18:30:35.248716 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0805 18:30:35.248782 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.250286 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0805 18:30:35.250341 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.303524 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0805 18:30:35.303589 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.367492 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0805 18:30:35.369517 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.388498 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0805 18:30:35.388837 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0805 18:30:35.410003 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0805 18:30:35.410361 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.446941 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0805 18:30:35.447251 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.486821 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0805 18:30:35.488405 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.553386 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0805 18:30:35.553770 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.571792 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0805 18:30:35.571860 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0805 18:30:37.468628 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 18:30:51.802642 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0805 18:30:51.802765 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I0805 18:30:51.803959 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0805 18:30:51.809039 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Aug 05 18:32:21 newest-cni-006868 kubelet[2742]: I0805 18:32:21.558383 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b4e9b85518de368a90a4e6d2056bba469c91b85de3210dcf44640a7b013ac3"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.111333 2742 apiserver.go:52] "Watching apiserver"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.160408 2742 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181350 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7569998c-3a39-42a8-ab1d-e146b5179424-lib-modules\") pod \"kube-proxy-xqx9t\" (UID: \"7569998c-3a39-42a8-ab1d-e146b5179424\") " pod="kube-system/kube-proxy-xqx9t"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181451 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f8983c9e-ebbc-44da-bccc-cee486a01c95-tmp\") pod \"storage-provisioner\" (UID: \"f8983c9e-ebbc-44da-bccc-cee486a01c95\") " pod="kube-system/storage-provisioner"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181522 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wr6\" (UniqueName: \"kubernetes.io/projected/046c860c-41bb-4461-877e-7193f53258f3-kube-api-access-74wr6\") pod \"kubernetes-dashboard-695b96c756-9b4fb\" (UID: \"046c860c-41bb-4461-877e-7193f53258f3\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9b4fb"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181557 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d5bc\" (UniqueName: \"kubernetes.io/projected/17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b-kube-api-access-2d5bc\") pod \"dashboard-metrics-scraper-7c96f5b85b-98qnz\" (UID: \"17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-98qnz"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181628 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7569998c-3a39-42a8-ab1d-e146b5179424-xtables-lock\") pod \"kube-proxy-xqx9t\" (UID: \"7569998c-3a39-42a8-ab1d-e146b5179424\") " pod="kube-system/kube-proxy-xqx9t"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.282122 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvhqj\" (UniqueName: \"kubernetes.io/projected/864943b9-5315-452b-a31a-85db981929ed-kube-api-access-fvhqj\") pod \"864943b9-5315-452b-a31a-85db981929ed\" (UID: \"864943b9-5315-452b-a31a-85db981929ed\") "
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.282212 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/864943b9-5315-452b-a31a-85db981929ed-config-volume\") pod \"864943b9-5315-452b-a31a-85db981929ed\" (UID: \"864943b9-5315-452b-a31a-85db981929ed\") "
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.283800 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864943b9-5315-452b-a31a-85db981929ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "864943b9-5315-452b-a31a-85db981929ed" (UID: "864943b9-5315-452b-a31a-85db981929ed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.288017 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864943b9-5315-452b-a31a-85db981929ed-kube-api-access-fvhqj" (OuterVolumeSpecName: "kube-api-access-fvhqj") pod "864943b9-5315-452b-a31a-85db981929ed" (UID: "864943b9-5315-452b-a31a-85db981929ed"). InnerVolumeSpecName "kube-api-access-fvhqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.306752 2742 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.383355 2742 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/864943b9-5315-452b-a31a-85db981929ed-config-volume\") on node \"newest-cni-006868\" DevicePath \"\""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.383433 2742 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fvhqj\" (UniqueName: \"kubernetes.io/projected/864943b9-5315-452b-a31a-85db981929ed-kube-api-access-fvhqj\") on node \"newest-cni-006868\" DevicePath \"\""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.418807 2742 scope.go:117] "RemoveContainer" containerID="31b584e307bce12b7f3379ec6ac16f8b6d6c6252c94a3f4120b4b6999613ffca"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: I0805 18:32:23.418720 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e582e7bc69e8c7c0a8222f2d32b06a51d1eb6d61397834c824ae362f5c52301f"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: I0805 18:32:23.483764 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2006920987ae72bb5ba3e59bef4f6b30b410d9e619584f9e63ecce8317a134a4"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: E0805 18:32:23.505014 2742 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-newest-cni-006868\" already exists" pod="kube-system/kube-controller-manager-newest-cni-006868"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: E0805 18:32:23.508551 2742 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-newest-cni-006868\" already exists" pod="kube-system/etcd-newest-cni-006868"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: E0805 18:32:23.511900 2742 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-newest-cni-006868\" already exists" pod="kube-system/kube-scheduler-newest-cni-006868"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.033339 2742 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.033435 2742 kuberuntime_image.go:55] "Failed to pull image" err="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.034000 2742 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2d5bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-7c96f5b85b-98qnz_kubernetes-dashboard(17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b): ErrImagePull: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.035234 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-98qnz" podUID="17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b"
==> storage-provisioner [31b584e307bc] <==
I0805 18:31:43.406482 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0805 18:32:20.635481 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
==> storage-provisioner [4f78f2304a62] <==
I0805 18:32:22.788182 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0805 18:32:22.830507 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0805 18:32:22.831349 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006868 -n newest-cni-006868
helpers_test.go:261: (dbg) Run: kubectl --context newest-cni-006868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context newest-cni-006868 describe pod metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context newest-cni-006868 describe pod metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb: exit status 1 (75.167445ms)
** stderr **
Error from server (NotFound): pods "metrics-server-6867b74b74-nbp4v" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-7c96f5b85b-98qnz" not found
Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-9b4fb" not found
** /stderr **
helpers_test.go:279: kubectl --context newest-cni-006868 describe pod metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-006868 -n newest-cni-006868
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p newest-cni-006868 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-006868 logs -n 25: (1.599308662s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| ssh | -p kubenet-837376 sudo | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p kubenet-837376 sudo | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p kubenet-837376 sudo find | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p kubenet-837376 sudo crio | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| | config | | | | | |
| delete | -p kubenet-837376 | kubenet-837376 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:29 UTC |
| start | -p newest-cni-006868 --memory=2200 --alsologtostderr | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:29 UTC | 05 Aug 24 18:30 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --kubernetes-version=v1.31.0-rc.0 | | | | | |
| addons | enable metrics-server -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-712347 | no-preload-712347 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | |
| | --memory=2200 --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.31.0-rc.0 | | | | | |
| addons | enable metrics-server -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:30 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:30 UTC | 05 Aug 24 18:31 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable metrics-server -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-006868 --memory=2200 --alsologtostderr | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --feature-gates ServerSideApply=true | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --kubernetes-version=v1.31.0-rc.0 | | | | | |
| addons | enable metrics-server -p default-k8s-diff-port-466451 | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | default-k8s-diff-port-466451 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-336753 | old-k8s-version-336753 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-466451 | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-466451 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | |
| | default-k8s-diff-port-466451 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.30.3 | | | | | |
| image | newest-cni-006868 image list | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:31 UTC | 05 Aug 24 18:31 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-006868 | newest-cni-006868 | jenkins | v1.33.1 | 05 Aug 24 18:32 UTC | 05 Aug 24 18:32 UTC |
| | --alsologtostderr -v=1 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/05 18:31:28
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.22.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0805 18:31:28.038157 69364 out.go:291] Setting OutFile to fd 1 ...
I0805 18:31:28.038253 69364 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 18:31:28.038260 69364 out.go:304] Setting ErrFile to fd 2...
I0805 18:31:28.038264 69364 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 18:31:28.038419 69364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19374-5415/.minikube/bin
I0805 18:31:28.038925 69364 out.go:298] Setting JSON to false
I0805 18:31:28.039800 69364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4439,"bootTime":1722878249,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0805 18:31:28.039856 69364 start.go:139] virtualization: kvm guest
I0805 18:31:28.042022 69364 out.go:177] * [default-k8s-diff-port-466451] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0805 18:31:28.043520 69364 out.go:177] - MINIKUBE_LOCATION=19374
I0805 18:31:28.043534 69364 notify.go:220] Checking for updates...
I0805 18:31:28.046016 69364 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0805 18:31:28.047213 69364 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:31:28.048409 69364 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19374-5415/.minikube
I0805 18:31:28.049787 69364 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0805 18:31:28.051156 69364 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0805 18:31:28.052751 69364 config.go:182] Loaded profile config "default-k8s-diff-port-466451": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 18:31:28.053184 69364 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:28.053266 69364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:28.068452 69364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39137
I0805 18:31:28.068858 69364 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:28.069513 69364 main.go:141] libmachine: Using API Version 1
I0805 18:31:28.069543 69364 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:28.069905 69364 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:28.070126 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:31:28.070409 69364 driver.go:392] Setting default libvirt URI to qemu:///system
I0805 18:31:28.070823 69364 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:28.070866 69364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:28.085450 69364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
I0805 18:31:28.085866 69364 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:28.086295 69364 main.go:141] libmachine: Using API Version 1
I0805 18:31:28.086316 69364 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:28.086606 69364 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:28.086798 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:31:28.122706 69364 out.go:177] * Using the kvm2 driver based on existing profile
I0805 18:31:28.124009 69364 start.go:297] selected driver: kvm2
I0805 18:31:28.124026 69364 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-466451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-466451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.196 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:31:28.124158 69364 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0805 18:31:28.125122 69364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 18:31:28.125213 69364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19374-5415/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0805 18:31:28.141195 69364 install.go:137] /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
I0805 18:31:28.141617 69364 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 18:31:28.141683 69364 cni.go:84] Creating CNI manager for ""
I0805 18:31:28.141705 69364 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 18:31:28.141780 69364 start.go:340] cluster config:
{Name:default-k8s-diff-port-466451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-466451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.196 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:31:28.141909 69364 iso.go:125] acquiring lock: {Name:mkad4f004e90cc668f8018dec3bb331fe9a9476c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 18:31:28.143783 69364 out.go:177] * Starting "default-k8s-diff-port-466451" primary control-plane node in "default-k8s-diff-port-466451" cluster
I0805 18:31:25.128627 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:25.129183 68956 main.go:141] libmachine: (newest-cni-006868) DBG | unable to find current IP address of domain newest-cni-006868 in network mk-newest-cni-006868
I0805 18:31:25.129212 68956 main.go:141] libmachine: (newest-cni-006868) DBG | I0805 18:31:25.129146 69008 retry.go:31] will retry after 3.693337981s: waiting for machine to come up
I0805 18:31:28.826260 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.826918 68956 main.go:141] libmachine: (newest-cni-006868) Found IP for machine: 192.168.39.154
I0805 18:31:28.826936 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has current primary IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.826942 68956 main.go:141] libmachine: (newest-cni-006868) Reserving static IP address...
I0805 18:31:28.827355 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "newest-cni-006868", mac: "52:54:00:1a:40:80", ip: "192.168.39.154"} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.827389 68956 main.go:141] libmachine: (newest-cni-006868) DBG | skip adding static IP to network mk-newest-cni-006868 - found existing host DHCP lease matching {name: "newest-cni-006868", mac: "52:54:00:1a:40:80", ip: "192.168.39.154"}
I0805 18:31:28.827402 68956 main.go:141] libmachine: (newest-cni-006868) Reserved static IP address: 192.168.39.154
I0805 18:31:28.827415 68956 main.go:141] libmachine: (newest-cni-006868) Waiting for SSH to be available...
I0805 18:31:28.827427 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Getting to WaitForSSH function...
I0805 18:31:28.829584 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.829917 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.829938 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.830019 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Using SSH client type: external
I0805 18:31:28.830060 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Using SSH private key: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa (-rw-------)
I0805 18:31:28.830099 68956 main.go:141] libmachine: (newest-cni-006868) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa -p 22] /usr/bin/ssh <nil>}
I0805 18:31:28.830126 68956 main.go:141] libmachine: (newest-cni-006868) DBG | About to run SSH command:
I0805 18:31:28.830138 68956 main.go:141] libmachine: (newest-cni-006868) DBG | exit 0
I0805 18:31:28.951524 68956 main.go:141] libmachine: (newest-cni-006868) DBG | SSH cmd err, output: <nil>:
I0805 18:31:28.951904 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetConfigRaw
I0805 18:31:28.952565 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:28.955122 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.955524 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.955547 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.955788 68956 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/config.json ...
I0805 18:31:28.955970 68956 machine.go:94] provisionDockerMachine start ...
I0805 18:31:28.955986 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:28.956195 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:28.958349 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.958680 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:28.958708 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:28.958835 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:28.959011 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:28.959173 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:28.959305 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:28.959469 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:28.959717 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:28.959735 68956 main.go:141] libmachine: About to run SSH command:
hostname
I0805 18:31:29.064086 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0805 18:31:29.064117 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetMachineName
I0805 18:31:29.064391 68956 buildroot.go:166] provisioning hostname "newest-cni-006868"
I0805 18:31:29.064418 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetMachineName
I0805 18:31:29.064622 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.067577 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.067960 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.067985 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.068122 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.068299 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.068484 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.068614 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.068785 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.068954 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.068966 68956 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-006868 && echo "newest-cni-006868" | sudo tee /etc/hostname
I0805 18:31:29.188912 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-006868
I0805 18:31:29.188943 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.191934 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.192363 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.192396 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.192612 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.192793 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.192972 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.193066 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.193233 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.193447 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.193474 68956 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-006868' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-006868/g' /etc/hosts;
else
echo '127.0.1.1 newest-cni-006868' | sudo tee -a /etc/hosts;
fi
fi
I0805 18:31:29.308113 68956 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 18:31:29.308142 68956 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19374-5415/.minikube CaCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19374-5415/.minikube}
I0805 18:31:29.308177 68956 buildroot.go:174] setting up certificates
I0805 18:31:29.308189 68956 provision.go:84] configureAuth start
I0805 18:31:29.308198 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetMachineName
I0805 18:31:29.308504 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:29.311116 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.311512 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.311552 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.311671 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.313902 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.314283 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.314310 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.314447 68956 provision.go:143] copyHostCerts
I0805 18:31:29.314509 68956 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem, removing ...
I0805 18:31:29.314518 68956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem
I0805 18:31:29.314573 68956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem (1082 bytes)
I0805 18:31:29.314668 68956 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem, removing ...
I0805 18:31:29.314678 68956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem
I0805 18:31:29.314699 68956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem (1123 bytes)
I0805 18:31:29.314752 68956 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem, removing ...
I0805 18:31:29.314758 68956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem
I0805 18:31:29.314776 68956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem (1679 bytes)
I0805 18:31:29.314818 68956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem org=jenkins.newest-cni-006868 san=[127.0.0.1 192.168.39.154 localhost minikube newest-cni-006868]
I0805 18:31:29.626177 68956 provision.go:177] copyRemoteCerts
I0805 18:31:29.626242 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 18:31:29.626265 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.629168 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.629519 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.629550 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.629752 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.629963 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.630115 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.630220 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:29.709270 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0805 18:31:29.732286 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0805 18:31:29.754900 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0805 18:31:29.777170 68956 provision.go:87] duration metric: took 468.966791ms to configureAuth
I0805 18:31:29.777209 68956 buildroot.go:189] setting minikube options for container-runtime
I0805 18:31:29.777423 68956 config.go:182] Loaded profile config "newest-cni-006868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
I0805 18:31:29.777447 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:29.777699 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.780158 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.780606 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.780634 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.780739 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.781023 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.781187 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.781341 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.781480 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.781632 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.781642 68956 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 18:31:32.096421 69206 start.go:364] duration metric: took 16.264802166s to acquireMachinesLock for "old-k8s-version-336753"
I0805 18:31:32.096529 69206 start.go:96] Skipping create...Using existing machine configuration
I0805 18:31:32.096537 69206 fix.go:54] fixHost starting:
I0805 18:31:32.096934 69206 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:32.096975 69206 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:32.117552 69206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
I0805 18:31:32.118063 69206 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:32.118563 69206 main.go:141] libmachine: Using API Version 1
I0805 18:31:32.118588 69206 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:32.118901 69206 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:32.119065 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:32.119212 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetState
I0805 18:31:32.121000 69206 fix.go:112] recreateIfNeeded on old-k8s-version-336753: state=Stopped err=<nil>
I0805 18:31:32.121043 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
W0805 18:31:32.121213 69206 fix.go:138] unexpected machine state, will restart: <nil>
I0805 18:31:32.122980 69206 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-336753" ...
I0805 18:31:29.358263 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:31.358296 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:28.144980 69364 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 18:31:28.145020 69364 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0805 18:31:28.145029 69364 cache.go:56] Caching tarball of preloaded images
I0805 18:31:28.145125 69364 preload.go:172] Found /home/jenkins/minikube-integration/19374-5415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0805 18:31:28.145139 69364 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0805 18:31:28.145279 69364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/config.json ...
I0805 18:31:28.145532 69364 start.go:360] acquireMachinesLock for default-k8s-diff-port-466451: {Name:mk1b1146f745487d6dfed2753982366f4453f7d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 18:31:29.884741 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0805 18:31:29.884762 68956 buildroot.go:70] root file system type: tmpfs
I0805 18:31:29.884880 68956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 18:31:29.884903 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:29.887693 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.888039 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:29.888066 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:29.888204 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:29.888387 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.888681 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:29.888803 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:29.889002 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:29.889212 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:29.889297 68956 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 18:31:30.005557 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0805 18:31:30.005607 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:30.008585 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:30.009025 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:30.009050 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:30.009263 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:30.009483 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:30.009666 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:30.009822 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:30.010043 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:30.010254 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:30.010284 68956 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 18:31:31.857038 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0805 18:31:31.857069 68956 machine.go:97] duration metric: took 2.901087555s to provisionDockerMachine
I0805 18:31:31.857082 68956 start.go:293] postStartSetup for "newest-cni-006868" (driver="kvm2")
I0805 18:31:31.857095 68956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 18:31:31.857110 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:31.857400 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 18:31:31.857425 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:31.860431 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.860925 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:31.860951 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.861178 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:31.861360 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:31.861512 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:31.861684 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:31.943778 68956 ssh_runner.go:195] Run: cat /etc/os-release
I0805 18:31:31.948056 68956 info.go:137] Remote host: Buildroot 2023.02.9
I0805 18:31:31.948080 68956 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/addons for local assets ...
I0805 18:31:31.948142 68956 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/files for local assets ...
I0805 18:31:31.948260 68956 filesync.go:149] local asset: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem -> 125812.pem in /etc/ssl/certs
I0805 18:31:31.948384 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0805 18:31:31.962677 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:31:31.989252 68956 start.go:296] duration metric: took 132.156962ms for postStartSetup
I0805 18:31:31.989295 68956 fix.go:56] duration metric: took 21.476755377s for fixHost
I0805 18:31:31.989315 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:31.992010 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.992323 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:31.992348 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:31.992505 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:31.992771 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:31.993148 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:31.993357 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:31.993602 68956 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:31.993791 68956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.39.154 22 <nil> <nil>}
I0805 18:31:31.993805 68956 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0805 18:31:32.096264 68956 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722882692.070538782
I0805 18:31:32.096287 68956 fix.go:216] guest clock: 1722882692.070538782
I0805 18:31:32.096297 68956 fix.go:229] Guest: 2024-08-05 18:31:32.070538782 +0000 UTC Remote: 2024-08-05 18:31:31.989299358 +0000 UTC m=+27.213460656 (delta=81.239424ms)
I0805 18:31:32.096347 68956 fix.go:200] guest clock delta is within tolerance: 81.239424ms
I0805 18:31:32.096354 68956 start.go:83] releasing machines lock for "newest-cni-006868", held for 21.583845798s
I0805 18:31:32.096391 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.096678 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:32.099510 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.099957 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:32.099985 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.100177 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.100746 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.100959 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:32.101061 68956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0805 18:31:32.101107 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:32.101196 68956 ssh_runner.go:195] Run: cat /version.json
I0805 18:31:32.101217 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:32.103924 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104147 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104314 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:32.104341 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104505 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:32.104622 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:32.104653 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:32.104673 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:32.104846 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:32.104885 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:32.105020 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:32.105060 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:32.105300 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:32.105435 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:32.205355 68956 ssh_runner.go:195] Run: systemctl --version
I0805 18:31:32.211845 68956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0805 18:31:32.217396 68956 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0805 18:31:32.217487 68956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0805 18:31:32.233535 68956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0805 18:31:32.233563 68956 start.go:495] detecting cgroup driver to use...
I0805 18:31:32.233677 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:32.251991 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0805 18:31:32.262815 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0805 18:31:32.273954 68956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0805 18:31:32.274024 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0805 18:31:32.285204 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:32.296809 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0805 18:31:32.308942 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:32.321376 68956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0805 18:31:32.333130 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0805 18:31:32.343947 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0805 18:31:32.354925 68956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0805 18:31:32.366413 68956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0805 18:31:32.376515 68956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0805 18:31:32.387123 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:32.505648 68956 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0805 18:31:32.530207 68956 start.go:495] detecting cgroup driver to use...
I0805 18:31:32.530279 68956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0805 18:31:32.545812 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:32.563975 68956 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0805 18:31:32.582644 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:32.600020 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:32.615404 68956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0805 18:31:32.645943 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:32.661782 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:32.681377 68956 ssh_runner.go:195] Run: which cri-dockerd
I0805 18:31:32.686005 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0805 18:31:32.696284 68956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0805 18:31:32.713438 68956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0805 18:31:32.850020 68956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0805 18:31:32.992265 68956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0805 18:31:32.992412 68956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0805 18:31:33.012368 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:33.144357 68956 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:31:32.124304 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .Start
I0805 18:31:32.124531 69206 main.go:141] libmachine: (old-k8s-version-336753) Ensuring networks are active...
I0805 18:31:32.125343 69206 main.go:141] libmachine: (old-k8s-version-336753) Ensuring network default is active
I0805 18:31:32.125699 69206 main.go:141] libmachine: (old-k8s-version-336753) Ensuring network mk-old-k8s-version-336753 is active
I0805 18:31:32.126072 69206 main.go:141] libmachine: (old-k8s-version-336753) Getting domain xml...
I0805 18:31:32.126819 69206 main.go:141] libmachine: (old-k8s-version-336753) Creating domain...
I0805 18:31:33.414704 69206 main.go:141] libmachine: (old-k8s-version-336753) Waiting to get IP...
I0805 18:31:33.415652 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:33.416293 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:33.416386 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:33.416267 69427 retry.go:31] will retry after 255.011071ms: waiting for machine to come up
I0805 18:31:33.673148 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:33.673835 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:33.673878 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:33.673791 69427 retry.go:31] will retry after 373.631452ms: waiting for machine to come up
I0805 18:31:34.049506 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:34.049997 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:34.050029 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:34.049950 69427 retry.go:31] will retry after 392.215323ms: waiting for machine to come up
I0805 18:31:34.443438 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:34.444018 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:34.444044 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:34.443953 69427 retry.go:31] will retry after 608.331592ms: waiting for machine to come up
I0805 18:31:35.053500 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:35.054028 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:35.054051 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:35.053983 69427 retry.go:31] will retry after 716.029966ms: waiting for machine to come up
I0805 18:31:35.587036 68956 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.44264544s)
I0805 18:31:35.587118 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0805 18:31:35.601344 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:31:35.616637 68956 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0805 18:31:35.725430 68956 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0805 18:31:35.855686 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:35.978285 68956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0805 18:31:35.996500 68956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:31:36.010473 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:36.138638 68956 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0805 18:31:36.214673 68956 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0805 18:31:36.214755 68956 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0805 18:31:36.220751 68956 start.go:563] Will wait 60s for crictl version
I0805 18:31:36.220814 68956 ssh_runner.go:195] Run: which crictl
I0805 18:31:36.224676 68956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0805 18:31:36.260941 68956 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0805 18:31:36.261031 68956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:36.285143 68956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:36.307748 68956 out.go:204] * Preparing Kubernetes v1.31.0-rc.0 on Docker 27.1.1 ...
I0805 18:31:36.307789 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetIP
I0805 18:31:36.310661 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:36.311042 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:36.311070 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:36.311254 68956 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0805 18:31:36.315204 68956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:31:36.330230 68956 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0805 18:31:33.859336 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:35.864034 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:36.331272 68956 kubeadm.go:883] updating cluster {Name:newest-cni-006868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-006868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0805 18:31:36.331399 68956 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
I0805 18:31:36.331484 68956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:36.349695 68956 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-scheduler:v1.31.0-rc.0
registry.k8s.io/kube-apiserver:v1.31.0-rc.0
registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
registry.k8s.io/kube-proxy:v1.31.0-rc.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0805 18:31:36.349720 68956 docker.go:615] Images already preloaded, skipping extraction
I0805 18:31:36.349795 68956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:36.368958 68956 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.31.0-rc.0
registry.k8s.io/kube-scheduler:v1.31.0-rc.0
registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
registry.k8s.io/kube-proxy:v1.31.0-rc.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0805 18:31:36.368986 68956 cache_images.go:84] Images are preloaded, skipping loading
I0805 18:31:36.368998 68956 kubeadm.go:934] updating node { 192.168.39.154 8443 v1.31.0-rc.0 docker true true} ...
I0805 18:31:36.369130 68956 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-006868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
[Install]
config:
{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-006868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0805 18:31:36.369203 68956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0805 18:31:36.421222 68956 cni.go:84] Creating CNI manager for ""
I0805 18:31:36.421263 68956 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 18:31:36.421280 68956 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0805 18:31:36.421311 68956 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-006868 NodeName:newest-cni-006868 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0805 18:31:36.421495 68956 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.154
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "newest-cni-006868"
  kubeletExtraArgs:
    node-ip: 192.168.39.154
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    feature-gates: "ServerSideApply=true"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    feature-gates: "ServerSideApply=true"
    leader-elect: "false"
scheduler:
  extraArgs:
    feature-gates: "ServerSideApply=true"
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0-rc.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0805 18:31:36.421570 68956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
I0805 18:31:36.431938 68956 binaries.go:44] Found k8s binaries, skipping transfer
I0805 18:31:36.432001 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0805 18:31:36.441320 68956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
I0805 18:31:36.460733 68956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I0805 18:31:36.480989 68956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
I0805 18:31:36.500828 68956 ssh_runner.go:195] Run: grep 192.168.39.154 control-plane.minikube.internal$ /etc/hosts
I0805 18:31:36.505320 68956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:31:36.517494 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:36.635749 68956 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 18:31:36.656881 68956 certs.go:68] Setting up /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868 for IP: 192.168.39.154
I0805 18:31:36.656902 68956 certs.go:194] generating shared ca certs ...
I0805 18:31:36.656924 68956 certs.go:226] acquiring lock for ca certs: {Name:mkd5950c6b2de2854a748470350a45601540dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:36.657099 68956 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key
I0805 18:31:36.657177 68956 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key
I0805 18:31:36.657192 68956 certs.go:256] generating profile certs ...
I0805 18:31:36.657305 68956 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/client.key
I0805 18:31:36.657390 68956 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/apiserver.key.b83b5c3d
I0805 18:31:36.657459 68956 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/proxy-client.key
I0805 18:31:36.657620 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem (1338 bytes)
W0805 18:31:36.657667 68956 certs.go:480] ignoring /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581_empty.pem, impossibly tiny 0 bytes
I0805 18:31:36.657681 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem (1679 bytes)
I0805 18:31:36.657716 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem (1082 bytes)
I0805 18:31:36.657761 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem (1123 bytes)
I0805 18:31:36.657794 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem (1679 bytes)
I0805 18:31:36.657870 68956 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:31:36.658661 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0805 18:31:36.688339 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0805 18:31:36.717836 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0805 18:31:36.746025 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0805 18:31:36.777296 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0805 18:31:36.806710 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0805 18:31:36.840217 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0805 18:31:36.869941 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/newest-cni-006868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0805 18:31:36.895297 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0805 18:31:36.917901 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem --> /usr/share/ca-certificates/12581.pem (1338 bytes)
I0805 18:31:36.945953 68956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /usr/share/ca-certificates/125812.pem (1708 bytes)
I0805 18:31:36.970496 68956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0805 18:31:36.987008 68956 ssh_runner.go:195] Run: openssl version
I0805 18:31:36.992758 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0805 18:31:37.003532 68956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0805 18:31:37.007947 68956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 5 17:27 /usr/share/ca-certificates/minikubeCA.pem
I0805 18:31:37.008017 68956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0805 18:31:37.013763 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0805 18:31:37.024449 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12581.pem && ln -fs /usr/share/ca-certificates/12581.pem /etc/ssl/certs/12581.pem"
I0805 18:31:37.035265 68956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12581.pem
I0805 18:31:37.040098 68956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 5 17:34 /usr/share/ca-certificates/12581.pem
I0805 18:31:37.040169 68956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12581.pem
I0805 18:31:37.045922 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12581.pem /etc/ssl/certs/51391683.0"
I0805 18:31:37.056956 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125812.pem && ln -fs /usr/share/ca-certificates/125812.pem /etc/ssl/certs/125812.pem"
I0805 18:31:37.067712 68956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125812.pem
I0805 18:31:37.072276 68956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 5 17:34 /usr/share/ca-certificates/125812.pem
I0805 18:31:37.072338 68956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125812.pem
I0805 18:31:37.078029 68956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125812.pem /etc/ssl/certs/3ec20f2e.0"
I0805 18:31:37.088772 68956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0805 18:31:37.093194 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0805 18:31:37.098918 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0805 18:31:37.104393 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0805 18:31:37.110913 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0805 18:31:37.116610 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0805 18:31:37.122812 68956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0805 18:31:37.128599 68956 kubeadm.go:392] StartCluster: {Name:newest-cni-006868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-006868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:31:37.128719 68956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:31:37.146660 68956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0805 18:31:37.157633 68956 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0805 18:31:37.157659 68956 kubeadm.go:593] restartPrimaryControlPlane start ...
I0805 18:31:37.157710 68956 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0805 18:31:37.170098 68956 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0805 18:31:37.170797 68956 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-006868" does not appear in /home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:31:37.171095 68956 kubeconfig.go:62] /home/jenkins/minikube-integration/19374-5415/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-006868" cluster setting kubeconfig missing "newest-cni-006868" context setting]
I0805 18:31:37.171791 68956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19374-5415/kubeconfig: {Name:mk625b9ea6f09360b6a4e9f50277b2927e24bcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:37.173305 68956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0805 18:31:37.183424 68956 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.154
I0805 18:31:37.183465 68956 kubeadm.go:1160] stopping kube-system containers ...
I0805 18:31:37.183523 68956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:31:37.204002 68956 docker.go:483] Stopping containers: [29e5fb15e683 ee3ce586dc31 863f972a9657 5134dfdc0d50 fbf011536a75 034b8846cf12 21fb70992ec3 cac43aae145f c9b4e9b85518 067a823d9b94 e572d9a1938b 76015da0e4b2 ada8531e09d7 49ed690f0f0c 50297a33ca66 f4c413a7965b 37ceea586604]
I0805 18:31:37.204086 68956 ssh_runner.go:195] Run: docker stop 29e5fb15e683 ee3ce586dc31 863f972a9657 5134dfdc0d50 fbf011536a75 034b8846cf12 21fb70992ec3 cac43aae145f c9b4e9b85518 067a823d9b94 e572d9a1938b 76015da0e4b2 ada8531e09d7 49ed690f0f0c 50297a33ca66 f4c413a7965b 37ceea586604
I0805 18:31:37.225342 68956 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0805 18:31:37.241886 68956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0805 18:31:37.251158 68956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0805 18:31:37.251181 68956 kubeadm.go:157] found existing configuration files:
I0805 18:31:37.251227 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0805 18:31:37.260004 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0805 18:31:37.260077 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0805 18:31:37.269615 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0805 18:31:37.279028 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0805 18:31:37.279103 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0805 18:31:37.288499 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0805 18:31:37.297538 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0805 18:31:37.297594 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0805 18:31:37.307369 68956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0805 18:31:37.316530 68956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0805 18:31:37.316595 68956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
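The stale-config cleanup above (kubeadm.go:163) greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file where the endpoint is absent or unreadable, so kubeadm will regenerate it. A minimal sketch of that logic, using a temp directory and sample files as stand-ins for /etc/kubernetes:

```shell
#!/bin/sh
# Sketch of the grep-then-rm cleanup: keep a kubeconfig only if it points
# at the expected control-plane endpoint.
endpoint="https://control-plane.minikube.internal:8443"
dir=$(mktemp -d)

# admin.conf has the right endpoint; kubelet.conf points elsewhere;
# scheduler.conf is missing entirely (as in the log above).
printf 'server: %s\n' "$endpoint" > "$dir/admin.conf"
printf 'server: https://old-host:8443\n' > "$dir/kubelet.conf"

for f in admin.conf kubelet.conf scheduler.conf; do
  if ! grep -q "$endpoint" "$dir/$f" 2>/dev/null; then
    # endpoint absent (or file missing): remove so kubeadm regenerates it
    rm -f "$dir/$f"
  fi
done

ls "$dir"
```

Only admin.conf survives the loop; the other two are (re)created by the `kubeadm init phase kubeconfig` step that follows.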
I0805 18:31:37.326160 68956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0805 18:31:37.335282 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:37.468295 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:38.183278 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:38.428796 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:38.497143 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
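On a control-plane restart minikube replays individual `kubeadm init phase` subcommands rather than running a full `kubeadm init`. The five phases visible in the log can be summarized by the dry-run loop below, which only prints the commands the log shows being executed via ssh_runner (paths are the ones from the log):

```shell
#!/bin/sh
# Dry-run listing of the kubeadm phases minikube replays on restart, in order.
KUBEADM_CFG=/var/tmp/minikube/kubeadm.yaml
BIN_PATH=/var/lib/minikube/binaries/v1.31.0-rc.0

phases_run=""
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
  echo "sudo env PATH=$BIN_PATH:\$PATH kubeadm init phase $phase --config $KUBEADM_CFG"
  phases_run="$phases_run$phase;"
done
```

Running the phases individually lets the restart reuse the existing etcd data and certificates directory instead of re-bootstrapping the cluster from scratch.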
I0805 18:31:38.585468 68956 api_server.go:52] waiting for apiserver process to appear ...
I0805 18:31:38.585558 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:39.085840 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:39.585942 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:35.771516 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:35.772025 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:35.772054 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:35.771977 69427 retry.go:31] will retry after 929.312732ms: waiting for machine to come up
I0805 18:31:36.703090 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:36.703733 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:36.703767 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:36.703685 69427 retry.go:31] will retry after 926.726893ms: waiting for machine to come up
I0805 18:31:37.632365 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:37.632942 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:37.632964 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:37.632900 69427 retry.go:31] will retry after 1.291343117s: waiting for machine to come up
I0805 18:31:38.926669 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:38.927129 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:38.927149 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:38.927101 69427 retry.go:31] will retry after 1.830445372s: waiting for machine to come up
I0805 18:31:38.358645 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:40.359280 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:42.359800 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:40.086662 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:40.105979 68956 api_server.go:72] duration metric: took 1.520510323s to wait for apiserver process to appear ...
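The "waiting for apiserver process" loop above re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until a match appears (about 1.5s here). A self-contained sketch of that poll, substituting a dummy background process with a distinctive command line for kube-apiserver:

```shell
#!/bin/sh
# Stand-in for kube-apiserver: a sleep with an unusual argument we can match
# with pgrep -f, started after a short delay to force a few empty polls.
(sleep 0.4; exec sleep 7.31) &

tries=0
# minikube's equivalent: sudo pgrep -xnf kube-apiserver.*minikube.*
until pgrep -f "sleep 7.31" >/dev/null 2>&1; do
  tries=$((tries+1))
  sleep 0.2   # minikube waits ~500ms between polls
done
echo "process appeared after $tries empty polls"
```

The exact number of empty polls depends on scheduling; what matters is that the loop exits only once pgrep finds the process.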
I0805 18:31:40.106008 68956 api_server.go:88] waiting for apiserver healthz status ...
I0805 18:31:40.106050 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:42.459584 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 18:31:42.459614 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 18:31:42.459638 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:42.620386 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:42.620425 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:42.620442 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:42.645980 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:42.646012 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:43.106199 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:43.120222 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:43.120246 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:43.606910 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:43.612862 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 18:31:43.612894 68956 api_server.go:103] status: https://192.168.39.154:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 18:31:44.106209 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:44.111907 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
ok
I0805 18:31:44.119294 68956 api_server.go:141] control plane version: v1.31.0-rc.0
I0805 18:31:44.119320 68956 api_server.go:131] duration metric: took 4.01330617s to wait for apiserver health ...
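The healthz wait above treats both 403 (anonymous access forbidden until the RBAC bootstrap roles exist) and 500 (some `poststarthook` checks still failing) as retryable, and stops only on 200. The retry policy can be simulated as below; `check_healthz` is a stub standing in for an HTTPS probe of `https://192.168.39.154:8443/healthz`:

```shell
#!/bin/sh
# Simulated status-code sequence matching the log: 403, then two 500s
# (fewer [-] hooks each time), then 200 once all post-start hooks finish.
attempt=0
code=""
check_healthz() {
  # a real probe would be: curl -sk -o /dev/null -w '%{http_code}' "$URL/healthz"
  attempt=$((attempt+1))
  case $attempt in
    1)   code=403 ;;  # forbidden: RBAC bootstrap roles not created yet
    2|3) code=500 ;;  # some poststarthook checks still report "failed"
    *)   code=200 ;;  # all hooks done: healthy
  esac
}

check_healthz
until [ "$code" = 200 ]; do
  sleep 0.1   # minikube waits ~500ms between checks
  check_healthz
done
echo "healthy after $attempt attempts"
```

With this sequence the loop exits on the fourth attempt, mirroring the ~4s the log reports for the health wait.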
I0805 18:31:44.119328 68956 cni.go:84] Creating CNI manager for ""
I0805 18:31:44.119339 68956 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 18:31:44.121423 68956 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0805 18:31:44.122803 68956 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0805 18:31:44.133137 68956 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
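The `scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)` line above is minikube writing a generated bridge CNI config onto the node. The exact 496-byte file is produced from minikube's internal template; the sketch below writes a representative (not literal) bridge conflist, using a temp directory in place of /etc/cni/net.d:

```shell
#!/bin/sh
# Write a representative bridge CNI conflist like the one minikube installs.
# NOTE: generic example only, not the exact file from the log.
dir=$(mktemp -d)
cat > "$dir/1-k8s.conflist" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
wc -c < "$dir/1-k8s.conflist"
```

The kubelet picks up whatever conflist sorts first in /etc/cni/net.d, which is why the file is named with a `1-` prefix.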
I0805 18:31:44.150947 68956 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 18:31:44.163475 68956 system_pods.go:59] 9 kube-system pods found
I0805 18:31:44.163513 68956 system_pods.go:61] "coredns-6f6b679f8f-88m8m" [864943b9-5315-452b-a31a-85db981929ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.163521 68956 system_pods.go:61] "coredns-6f6b679f8f-8lr5f" [c562efab-4c2c-415a-908a-1a8dbb1c8070] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.163529 68956 system_pods.go:61] "etcd-newest-cni-006868" [488a02a4-833a-4a75-8d8d-cdc43de28b87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0805 18:31:44.163535 68956 system_pods.go:61] "kube-apiserver-newest-cni-006868" [967e63e5-3b01-4e52-877d-1ae933940f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0805 18:31:44.163541 68956 system_pods.go:61] "kube-controller-manager-newest-cni-006868" [a86570e6-192e-4833-bda2-c00d1d0c1ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0805 18:31:44.163545 68956 system_pods.go:61] "kube-proxy-xqx9t" [7569998c-3a39-42a8-ab1d-e146b5179424] Running
I0805 18:31:44.163550 68956 system_pods.go:61] "kube-scheduler-newest-cni-006868" [c74a0c10-6d7b-4e99-bcbd-a7a603c0dc4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0805 18:31:44.163555 68956 system_pods.go:61] "metrics-server-6867b74b74-nbp4v" [6ed58f0d-6054-473c-971e-c2269a8c059b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0805 18:31:44.163563 68956 system_pods.go:61] "storage-provisioner" [f8983c9e-ebbc-44da-bccc-cee486a01c95] Running
I0805 18:31:44.163569 68956 system_pods.go:74] duration metric: took 12.604099ms to wait for pod list to return data ...
I0805 18:31:44.163578 68956 node_conditions.go:102] verifying NodePressure condition ...
I0805 18:31:44.167998 68956 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0805 18:31:44.168022 68956 node_conditions.go:123] node cpu capacity is 2
I0805 18:31:44.168033 68956 node_conditions.go:105] duration metric: took 4.451013ms to run NodePressure ...
I0805 18:31:44.168050 68956 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:31:44.424930 68956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0805 18:31:44.436629 68956 ops.go:34] apiserver oom_adj: -16
I0805 18:31:44.436658 68956 kubeadm.go:597] duration metric: took 7.278991861s to restartPrimaryControlPlane
I0805 18:31:44.436669 68956 kubeadm.go:394] duration metric: took 7.308078248s to StartCluster
I0805 18:31:44.436687 68956 settings.go:142] acquiring lock: {Name:mka55bc46b2003e604f2001e767e118228a1c7ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:44.436770 68956 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:31:44.437729 68956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19374-5415/kubeconfig: {Name:mk625b9ea6f09360b6a4e9f50277b2927e24bcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:31:44.437989 68956 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0805 18:31:44.438046 68956 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0805 18:31:44.438120 68956 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-006868"
I0805 18:31:44.438142 68956 addons.go:69] Setting default-storageclass=true in profile "newest-cni-006868"
I0805 18:31:44.438163 68956 addons.go:69] Setting dashboard=true in profile "newest-cni-006868"
I0805 18:31:44.438185 68956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-006868"
I0805 18:31:44.438195 68956 addons.go:234] Setting addon dashboard=true in "newest-cni-006868"
W0805 18:31:44.438203 68956 addons.go:243] addon dashboard should already be in state true
I0805 18:31:44.438182 68956 addons.go:69] Setting metrics-server=true in profile "newest-cni-006868"
I0805 18:31:44.438277 68956 addons.go:234] Setting addon metrics-server=true in "newest-cni-006868"
W0805 18:31:44.438297 68956 addons.go:243] addon metrics-server should already be in state true
I0805 18:31:44.438229 68956 config.go:182] Loaded profile config "newest-cni-006868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
I0805 18:31:44.438355 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.438234 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.438490 68956 cache.go:107] acquiring lock: {Name:mk08cdb5b35c2969a80271638168f940d6cf8598 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 18:31:44.438155 68956 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-006868"
W0805 18:31:44.438551 68956 addons.go:243] addon storage-provisioner should already be in state true
I0805 18:31:44.438574 68956 cache.go:115] /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0805 18:31:44.438580 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.438589 68956 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 135.701µs
I0805 18:31:44.438606 68956 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0805 18:31:44.438614 68956 cache.go:87] Successfully saved all images to host disk.
I0805 18:31:44.438647 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438702 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.438721 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438742 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.438794 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438819 68956 config.go:182] Loaded profile config "newest-cni-006868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
I0805 18:31:44.438880 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.438925 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.438956 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.439218 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.439249 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.440001 68956 out.go:177] * Verifying Kubernetes components...
I0805 18:31:44.441326 68956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:44.456512 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
I0805 18:31:44.456542 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
I0805 18:31:44.457322 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.457393 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.457867 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.457891 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.458016 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.458044 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.458268 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.458374 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.458447 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.458563 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
I0805 18:31:44.458689 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
I0805 18:31:44.458963 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.458980 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.458987 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.459361 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.459385 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.459401 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.459889 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.459908 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.459965 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.460323 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.460489 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.460555 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
I0805 18:31:44.460743 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.460877 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.460919 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.460934 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.461423 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.461444 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.461747 68956 addons.go:234] Setting addon default-storageclass=true in "newest-cni-006868"
W0805 18:31:44.461768 68956 addons.go:243] addon default-storageclass should already be in state true
I0805 18:31:44.461796 68956 host.go:66] Checking if "newest-cni-006868" exists ...
I0805 18:31:44.461919 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.462115 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.462146 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.462188 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.464332 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.464371 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.477087 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43237
I0805 18:31:44.478815 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
I0805 18:31:44.479170 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.479309 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.479875 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.479896 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.480036 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.480051 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.480425 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.480613 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.480682 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.480947 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
I0805 18:31:44.481239 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.481874 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.482424 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.482440 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.482871 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.483348 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39979
I0805 18:31:44.483546 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35729
I0805 18:31:44.483950 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.484098 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.484145 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.484211 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.484291 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.484424 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.484447 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.484847 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.484872 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.484932 68956 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:31:44.485020 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.485385 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.485507 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.485717 68956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:44.485743 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.485852 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.485995 68956 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0805 18:31:44.486043 68956 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:44.486099 68956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:44.486125 68956 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0805 18:31:44.486144 68956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0805 18:31:44.486160 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.487359 68956 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0805 18:31:44.488516 68956 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0805 18:31:44.488581 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0805 18:31:44.488595 68956 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0805 18:31:44.488613 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.489340 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.489592 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0805 18:31:44.489609 68956 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0805 18:31:44.489626 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.490276 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.490301 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.490326 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.490346 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.490364 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.490398 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.490603 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.491270 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.491035 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.492146 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.492184 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.492380 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.492672 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.492989 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493323 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493384 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.493423 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493552 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.493803 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.493829 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.493859 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.493984 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.494104 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.494143 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.494260 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.494274 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.494374 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.528998 68956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41559
I0805 18:31:44.529488 68956 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:44.530050 68956 main.go:141] libmachine: Using API Version 1
I0805 18:31:44.530069 68956 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:44.530415 68956 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:44.530609 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetState
I0805 18:31:44.532093 68956 main.go:141] libmachine: (newest-cni-006868) Calling .DriverName
I0805 18:31:44.532310 68956 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0805 18:31:44.532326 68956 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0805 18:31:44.532343 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHHostname
I0805 18:31:44.535738 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.536303 68956 main.go:141] libmachine: (newest-cni-006868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:40:80", ip: ""} in network mk-newest-cni-006868: {Iface:virbr2 ExpiryTime:2024-08-05 19:31:21 +0000 UTC Type:0 Mac:52:54:00:1a:40:80 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:newest-cni-006868 Clientid:01:52:54:00:1a:40:80}
I0805 18:31:44.536333 68956 main.go:141] libmachine: (newest-cni-006868) DBG | domain newest-cni-006868 has defined IP address 192.168.39.154 and MAC address 52:54:00:1a:40:80 in network mk-newest-cni-006868
I0805 18:31:44.536550 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHPort
I0805 18:31:44.536736 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHKeyPath
I0805 18:31:44.536908 68956 main.go:141] libmachine: (newest-cni-006868) Calling .GetSSHUsername
I0805 18:31:44.537026 68956 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/newest-cni-006868/id_rsa Username:docker}
I0805 18:31:44.718357 68956 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 18:31:44.735895 68956 api_server.go:52] waiting for apiserver process to appear ...
I0805 18:31:44.735980 68956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:31:44.754478 68956 api_server.go:72] duration metric: took 316.452691ms to wait for apiserver process to appear ...
I0805 18:31:44.754507 68956 api_server.go:88] waiting for apiserver healthz status ...
I0805 18:31:44.754526 68956 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8443/healthz ...
I0805 18:31:44.764390 68956 api_server.go:279] https://192.168.39.154:8443/healthz returned 200:
ok
I0805 18:31:44.766297 68956 api_server.go:141] control plane version: v1.31.0-rc.0
I0805 18:31:44.766325 68956 api_server.go:131] duration metric: took 11.810001ms to wait for apiserver health ...
I0805 18:31:44.766335 68956 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 18:31:44.779739 68956 system_pods.go:59] 9 kube-system pods found
I0805 18:31:44.779771 68956 system_pods.go:61] "coredns-6f6b679f8f-88m8m" [864943b9-5315-452b-a31a-85db981929ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.779778 68956 system_pods.go:61] "coredns-6f6b679f8f-8lr5f" [c562efab-4c2c-415a-908a-1a8dbb1c8070] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 18:31:44.779785 68956 system_pods.go:61] "etcd-newest-cni-006868" [488a02a4-833a-4a75-8d8d-cdc43de28b87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0805 18:31:44.779791 68956 system_pods.go:61] "kube-apiserver-newest-cni-006868" [967e63e5-3b01-4e52-877d-1ae933940f46] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0805 18:31:44.779805 68956 system_pods.go:61] "kube-controller-manager-newest-cni-006868" [a86570e6-192e-4833-bda2-c00d1d0c1ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0805 18:31:44.779809 68956 system_pods.go:61] "kube-proxy-xqx9t" [7569998c-3a39-42a8-ab1d-e146b5179424] Running
I0805 18:31:44.779814 68956 system_pods.go:61] "kube-scheduler-newest-cni-006868" [c74a0c10-6d7b-4e99-bcbd-a7a603c0dc4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0805 18:31:44.779819 68956 system_pods.go:61] "metrics-server-6867b74b74-nbp4v" [6ed58f0d-6054-473c-971e-c2269a8c059b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0805 18:31:44.779823 68956 system_pods.go:61] "storage-provisioner" [f8983c9e-ebbc-44da-bccc-cee486a01c95] Running
I0805 18:31:44.779830 68956 system_pods.go:74] duration metric: took 13.488547ms to wait for pod list to return data ...
I0805 18:31:44.779839 68956 default_sa.go:34] waiting for default service account to be created ...
I0805 18:31:44.782766 68956 default_sa.go:45] found service account: "default"
I0805 18:31:44.782788 68956 default_sa.go:55] duration metric: took 2.943139ms for default service account to be created ...
I0805 18:31:44.782798 68956 kubeadm.go:582] duration metric: took 344.779681ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0805 18:31:44.782813 68956 node_conditions.go:102] verifying NodePressure condition ...
I0805 18:31:44.785159 68956 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0805 18:31:44.785178 68956 node_conditions.go:123] node cpu capacity is 2
I0805 18:31:44.785186 68956 node_conditions.go:105] duration metric: took 2.369979ms to run NodePressure ...
I0805 18:31:44.785197 68956 start.go:241] waiting for startup goroutines ...
I0805 18:31:40.759021 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:40.759678 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:40.759707 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:40.759641 69427 retry.go:31] will retry after 1.434861666s: waiting for machine to come up
I0805 18:31:42.196378 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:42.196942 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:42.196972 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:42.196902 69427 retry.go:31] will retry after 2.088776544s: waiting for machine to come up
I0805 18:31:44.288249 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:44.288829 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:44.288862 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:44.288782 69427 retry.go:31] will retry after 3.416549781s: waiting for machine to come up
I0805 18:31:44.820922 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0805 18:31:44.868548 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0805 18:31:44.868574 68956 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0805 18:31:44.902495 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0805 18:31:44.902519 68956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0805 18:31:44.915197 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0805 18:31:44.931007 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0805 18:31:44.931032 68956 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0805 18:31:44.966198 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0805 18:31:44.966223 68956 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0805 18:31:44.984059 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0805 18:31:44.984085 68956 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0805 18:31:45.023462 68956 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0805 18:31:45.023490 68956 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0805 18:31:45.073509 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0805 18:31:45.073532 68956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0805 18:31:45.122090 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0805 18:31:45.270164 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0805 18:31:45.270188 68956 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0805 18:31:45.377617 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0805 18:31:45.377644 68956 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0805 18:31:45.404215 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404243 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404363 68956 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-scheduler:v1.31.0-rc.0
registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
registry.k8s.io/kube-apiserver:v1.31.0-rc.0
registry.k8s.io/kube-proxy:v1.31.0-rc.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0805 18:31:45.404386 68956 cache_images.go:84] Images are preloaded, skipping loading
I0805 18:31:45.404398 68956 cache_images.go:262] succeeded pushing to: newest-cni-006868
I0805 18:31:45.404419 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404430 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404538 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.404560 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.404577 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404586 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404711 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.404725 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.404733 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.404745 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.404713 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.404862 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.404901 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.404906 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.404983 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.405018 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.405029 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.413441 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:45.413470 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:45.413761 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:45.413773 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:45.413788 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:45.451012 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0805 18:31:45.451046 68956 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0805 18:31:45.474601 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0805 18:31:45.474629 68956 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0805 18:31:45.495882 68956 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0805 18:31:45.495910 68956 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0805 18:31:45.536928 68956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0805 18:31:46.659511 68956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.744273283s)
I0805 18:31:46.659572 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.659587 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.659610 68956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.537483297s)
I0805 18:31:46.659670 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.659701 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.659924 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.659942 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:46.659953 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.659961 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.659960 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.659983 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:46.659996 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:46.660004 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:46.660282 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:46.660345 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:46.660376 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.660387 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:46.660397 68956 addons.go:475] Verifying addon metrics-server=true in "newest-cni-006868"
I0805 18:31:46.660441 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:46.660487 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:47.204841 68956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.667851253s)
I0805 18:31:47.204901 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:47.204915 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:47.205360 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:47.205402 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:47.205411 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:47.205420 68956 main.go:141] libmachine: Making call to close driver server
I0805 18:31:47.205428 68956 main.go:141] libmachine: (newest-cni-006868) Calling .Close
I0805 18:31:47.205746 68956 main.go:141] libmachine: (newest-cni-006868) DBG | Closing plugin on server side
I0805 18:31:47.205818 68956 main.go:141] libmachine: Successfully made call to close driver server
I0805 18:31:47.205844 68956 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 18:31:47.207425 68956 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-006868 addons enable metrics-server
I0805 18:31:47.208785 68956 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I0805 18:31:47.210024 68956 addons.go:510] duration metric: took 2.7719791s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I0805 18:31:47.210060 68956 start.go:246] waiting for cluster config update ...
I0805 18:31:47.210075 68956 start.go:255] writing updated cluster config ...
I0805 18:31:47.210373 68956 ssh_runner.go:195] Run: rm -f paused
I0805 18:31:47.257668 68956 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
I0805 18:31:47.259793 68956 out.go:177] * Done! kubectl is now configured to use "newest-cni-006868" cluster and "default" namespace by default
I0805 18:31:44.858577 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:46.858945 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:47.706302 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:47.706773 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | unable to find current IP address of domain old-k8s-version-336753 in network mk-old-k8s-version-336753
I0805 18:31:47.706823 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | I0805 18:31:47.706747 69427 retry.go:31] will retry after 4.41727256s: waiting for machine to come up
I0805 18:31:49.357761 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:51.358591 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:55.196740 69364 start.go:364] duration metric: took 27.051156361s to acquireMachinesLock for "default-k8s-diff-port-466451"
I0805 18:31:55.196792 69364 start.go:96] Skipping create...Using existing machine configuration
I0805 18:31:55.196800 69364 fix.go:54] fixHost starting:
I0805 18:31:55.197234 69364 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19374-5415/.minikube/bin/docker-machine-driver-kvm2
I0805 18:31:55.197282 69364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 18:31:55.217579 69364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
I0805 18:31:55.218027 69364 main.go:141] libmachine: () Calling .GetVersion
I0805 18:31:55.218575 69364 main.go:141] libmachine: Using API Version 1
I0805 18:31:55.218603 69364 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 18:31:55.218937 69364 main.go:141] libmachine: () Calling .GetMachineName
I0805 18:31:55.219147 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:31:55.219352 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetState
I0805 18:31:55.221258 69364 fix.go:112] recreateIfNeeded on default-k8s-diff-port-466451: state=Stopped err=<nil>
I0805 18:31:55.221301 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
W0805 18:31:55.221485 69364 fix.go:138] unexpected machine state, will restart: <nil>
I0805 18:31:55.223722 69364 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-466451" ...
I0805 18:31:52.125529 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.126018 69206 main.go:141] libmachine: (old-k8s-version-336753) Found IP for machine: 192.168.61.245
I0805 18:31:52.126038 69206 main.go:141] libmachine: (old-k8s-version-336753) Reserving static IP address...
I0805 18:31:52.126048 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has current primary IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.126471 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "old-k8s-version-336753", mac: "52:54:00:54:bf:8c", ip: "192.168.61.245"} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.126510 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | skip adding static IP to network mk-old-k8s-version-336753 - found existing host DHCP lease matching {name: "old-k8s-version-336753", mac: "52:54:00:54:bf:8c", ip: "192.168.61.245"}
I0805 18:31:52.126523 69206 main.go:141] libmachine: (old-k8s-version-336753) Reserved static IP address: 192.168.61.245
I0805 18:31:52.126539 69206 main.go:141] libmachine: (old-k8s-version-336753) Waiting for SSH to be available...
I0805 18:31:52.126564 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | Getting to WaitForSSH function...
I0805 18:31:52.128944 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.129268 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.129299 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.129514 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | Using SSH client type: external
I0805 18:31:52.129531 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | Using SSH private key: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa (-rw-------)
I0805 18:31:52.129573 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa -p 22] /usr/bin/ssh <nil>}
I0805 18:31:52.129591 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | About to run SSH command:
I0805 18:31:52.129615 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | exit 0
I0805 18:31:52.251558 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | SSH cmd err, output: <nil>:
I0805 18:31:52.252043 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetConfigRaw
I0805 18:31:52.252697 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:52.255356 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.255773 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.255799 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.256078 69206 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/config.json ...
I0805 18:31:52.256257 69206 machine.go:94] provisionDockerMachine start ...
I0805 18:31:52.256275 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:52.256495 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.258621 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.258977 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.259003 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.259117 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.259297 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.259449 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.259605 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.259803 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.259994 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.260010 69206 main.go:141] libmachine: About to run SSH command:
hostname
I0805 18:31:52.356566 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0805 18:31:52.356598 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetMachineName
I0805 18:31:52.356850 69206 buildroot.go:166] provisioning hostname "old-k8s-version-336753"
I0805 18:31:52.356875 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetMachineName
I0805 18:31:52.357068 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.359750 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.360210 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.360252 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.360348 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.360558 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.360744 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.360925 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.361105 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.361260 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.361272 69206 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-336753 && echo "old-k8s-version-336753" | sudo tee /etc/hostname
I0805 18:31:52.475048 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-336753
I0805 18:31:52.475082 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.478157 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.478560 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.478599 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.478792 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.478997 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.479151 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.479301 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.479461 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.479641 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.479664 69206 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-336753' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-336753/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-336753' | sudo tee -a /etc/hosts;
fi
fi
I0805 18:31:52.584682 69206 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 18:31:52.584738 69206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19374-5415/.minikube CaCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19374-5415/.minikube}
I0805 18:31:52.584758 69206 buildroot.go:174] setting up certificates
I0805 18:31:52.584768 69206 provision.go:84] configureAuth start
I0805 18:31:52.584776 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetMachineName
I0805 18:31:52.585110 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:52.587944 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.588310 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.588349 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.588500 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.591036 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.591480 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.591505 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.591728 69206 provision.go:143] copyHostCerts
I0805 18:31:52.591783 69206 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem, removing ...
I0805 18:31:52.591792 69206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem
I0805 18:31:52.591844 69206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem (1082 bytes)
I0805 18:31:52.591938 69206 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem, removing ...
I0805 18:31:52.591945 69206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem
I0805 18:31:52.591966 69206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem (1123 bytes)
I0805 18:31:52.592020 69206 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem, removing ...
I0805 18:31:52.592026 69206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem
I0805 18:31:52.592044 69206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem (1679 bytes)
I0805 18:31:52.592090 69206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-336753 san=[127.0.0.1 192.168.61.245 localhost minikube old-k8s-version-336753]
I0805 18:31:52.767859 69206 provision.go:177] copyRemoteCerts
I0805 18:31:52.767981 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 18:31:52.768017 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.772253 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.772696 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.772738 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.772914 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.773163 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.773349 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.773493 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:52.855319 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0805 18:31:52.878490 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0805 18:31:52.900455 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0805 18:31:52.922367 69206 provision.go:87] duration metric: took 337.58908ms to configureAuth
I0805 18:31:52.922397 69206 buildroot.go:189] setting minikube options for container-runtime
I0805 18:31:52.922584 69206 config.go:182] Loaded profile config "old-k8s-version-336753": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0805 18:31:52.922609 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:52.922897 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:52.925448 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.925857 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:52.925886 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:52.926051 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:52.926236 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.926383 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:52.926485 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:52.926655 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:52.926841 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:52.926854 69206 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 18:31:53.025041 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0805 18:31:53.025062 69206 buildroot.go:70] root file system type: tmpfs
I0805 18:31:53.025150 69206 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 18:31:53.025179 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:53.027866 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.028202 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:53.028235 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.028455 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:53.028665 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.028847 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.028949 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:53.029147 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:53.029324 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:53.029386 69206 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 18:31:53.141588 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0805 18:31:53.141627 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:53.144729 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.145117 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:53.145146 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:53.145463 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:53.145680 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.145865 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:53.145999 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:53.146127 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:53.146306 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:53.146324 69206 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 18:31:54.967065 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0805 18:31:54.967088 69206 machine.go:97] duration metric: took 2.710819429s to provisionDockerMachine
I0805 18:31:54.967100 69206 start.go:293] postStartSetup for "old-k8s-version-336753" (driver="kvm2")
I0805 18:31:54.967110 69206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 18:31:54.967134 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:54.967464 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 18:31:54.967490 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:54.970377 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:54.970839 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:54.970862 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:54.970998 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:54.971243 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:54.971421 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:54.971572 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:55.050716 69206 ssh_runner.go:195] Run: cat /etc/os-release
I0805 18:31:55.054990 69206 info.go:137] Remote host: Buildroot 2023.02.9
I0805 18:31:55.055023 69206 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/addons for local assets ...
I0805 18:31:55.055105 69206 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/files for local assets ...
I0805 18:31:55.055219 69206 filesync.go:149] local asset: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem -> 125812.pem in /etc/ssl/certs
I0805 18:31:55.055471 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0805 18:31:55.066732 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:31:55.090714 69206 start.go:296] duration metric: took 123.598653ms for postStartSetup
I0805 18:31:55.090762 69206 fix.go:56] duration metric: took 22.994225557s for fixHost
I0805 18:31:55.090781 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:55.093783 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.094193 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.094218 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.094447 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:55.094656 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.094850 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.095008 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:55.095161 69206 main.go:141] libmachine: Using SSH client type: native
I0805 18:31:55.095349 69206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.61.245 22 <nil> <nil>}
I0805 18:31:55.095362 69206 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0805 18:31:55.196538 69206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722882715.175783349
I0805 18:31:55.196565 69206 fix.go:216] guest clock: 1722882715.175783349
I0805 18:31:55.196575 69206 fix.go:229] Guest: 2024-08-05 18:31:55.175783349 +0000 UTC Remote: 2024-08-05 18:31:55.090766447 +0000 UTC m=+39.421747672 (delta=85.016902ms)
I0805 18:31:55.196598 69206 fix.go:200] guest clock delta is within tolerance: 85.016902ms
I0805 18:31:55.196603 69206 start.go:83] releasing machines lock for "old-k8s-version-336753", held for 23.100096865s
I0805 18:31:55.196628 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.196922 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:55.200016 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.200424 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.200453 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.200685 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.201212 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.201402 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .DriverName
I0805 18:31:55.201486 69206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0805 18:31:55.201526 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:55.201587 69206 ssh_runner.go:195] Run: cat /version.json
I0805 18:31:55.201611 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHHostname
I0805 18:31:55.204078 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204389 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204460 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.204487 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204689 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:55.204786 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:55.204823 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:55.204860 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.204982 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHPort
I0805 18:31:55.205052 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:55.205132 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHKeyPath
I0805 18:31:55.205192 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:55.205265 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetSSHUsername
I0805 18:31:55.205379 69206 sshutil.go:53] new ssh client: &{IP:192.168.61.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/old-k8s-version-336753/id_rsa Username:docker}
I0805 18:31:55.304405 69206 ssh_runner.go:195] Run: systemctl --version
I0805 18:31:55.311560 69206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0805 18:31:55.318082 69206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0805 18:31:55.318187 69206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0805 18:31:55.329393 69206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0805 18:31:55.345088 69206 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0805 18:31:55.345122 69206 start.go:495] detecting cgroup driver to use...
I0805 18:31:55.345250 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:55.381351 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0805 18:31:55.392454 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0805 18:31:55.404966 69206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0805 18:31:55.405029 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0805 18:31:55.415739 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:55.426035 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0805 18:31:55.437024 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:31:55.448071 69206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0805 18:31:55.459828 69206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0805 18:31:55.470757 69206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0805 18:31:55.483022 69206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0805 18:31:55.495192 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:55.614841 69206 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0805 18:31:55.640006 69206 start.go:495] detecting cgroup driver to use...
I0805 18:31:55.640126 69206 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0805 18:31:55.655242 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:55.669017 69206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0805 18:31:55.686891 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:31:55.700698 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:53.857159 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:55.858324 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:55.225144 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .Start
I0805 18:31:55.225355 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Ensuring networks are active...
I0805 18:31:55.226131 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Ensuring network default is active
I0805 18:31:55.226477 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Ensuring network mk-default-k8s-diff-port-466451 is active
I0805 18:31:55.226847 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Getting domain xml...
I0805 18:31:55.227665 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Creating domain...
I0805 18:31:56.585890 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Waiting to get IP...
I0805 18:31:56.586971 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:56.587528 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:56.587647 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:56.587517 69763 retry.go:31] will retry after 201.625509ms: waiting for machine to come up
I0805 18:31:56.791230 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:56.792000 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:56.792020 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:56.791923 69763 retry.go:31] will retry after 330.212805ms: waiting for machine to come up
I0805 18:31:57.123497 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:57.124072 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:57.124096 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:57.124025 69763 retry.go:31] will retry after 402.812867ms: waiting for machine to come up
I0805 18:31:57.528659 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:57.529242 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:57.529271 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:57.529210 69763 retry.go:31] will retry after 561.907384ms: waiting for machine to come up
I0805 18:31:55.714682 69206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0805 18:31:55.741014 69206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:31:55.755319 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:31:55.775684 69206 ssh_runner.go:195] Run: which cri-dockerd
I0805 18:31:55.779941 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0805 18:31:55.789342 69206 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0805 18:31:55.808377 69206 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0805 18:31:55.932404 69206 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0805 18:31:56.075906 69206 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0805 18:31:56.076042 69206 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0805 18:31:56.097153 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:31:56.217350 69206 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:31:58.659437 69206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.442043025s)
I0805 18:31:58.659516 69206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:58.690342 69206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:31:58.714352 69206 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 27.1.1 ...
I0805 18:31:58.714412 69206 main.go:141] libmachine: (old-k8s-version-336753) Calling .GetIP
I0805 18:31:58.717650 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:58.718109 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:bf:8c", ip: ""} in network mk-old-k8s-version-336753: {Iface:virbr3 ExpiryTime:2024-08-05 19:31:42 +0000 UTC Type:0 Mac:52:54:00:54:bf:8c Iaid: IPaddr:192.168.61.245 Prefix:24 Hostname:old-k8s-version-336753 Clientid:01:52:54:00:54:bf:8c}
I0805 18:31:58.718141 69206 main.go:141] libmachine: (old-k8s-version-336753) DBG | domain old-k8s-version-336753 has defined IP address 192.168.61.245 and MAC address 52:54:00:54:bf:8c in network mk-old-k8s-version-336753
I0805 18:31:58.718381 69206 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0805 18:31:58.722795 69206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:31:58.736421 69206 kubeadm.go:883] updating cluster {Name:old-k8s-version-336753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-336753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet:
MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0805 18:31:58.736543 69206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0805 18:31:58.736587 69206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:31:58.758990 69206 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0805 18:31:58.759010 69206 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
I0805 18:31:58.759055 69206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0805 18:31:58.769968 69206 ssh_runner.go:195] Run: which lz4
I0805 18:31:58.775304 69206 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0805 18:31:58.780343 69206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0805 18:31:58.780374 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
I0805 18:32:00.198025 69206 docker.go:649] duration metric: took 1.422765501s to copy over tarball
I0805 18:32:00.198118 69206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0805 18:31:58.359449 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:00.359839 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:31:58.093445 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:58.094003 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:58.094036 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:58.093934 69763 retry.go:31] will retry after 569.068607ms: waiting for machine to come up
I0805 18:31:58.664259 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:58.664996 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:58.665030 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:58.664943 69763 retry.go:31] will retry after 844.153352ms: waiting for machine to come up
I0805 18:31:59.510670 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:31:59.511274 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:31:59.511303 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:31:59.511250 69763 retry.go:31] will retry after 1.040034813s: waiting for machine to come up
I0805 18:32:00.553440 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:00.554135 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:00.554167 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:00.554079 69763 retry.go:31] will retry after 1.210960125s: waiting for machine to come up
I0805 18:32:01.766775 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:01.767529 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:01.767560 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:01.767470 69763 retry.go:31] will retry after 1.822151774s: waiting for machine to come up
I0805 18:32:02.837145 69206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.638997048s)
I0805 18:32:02.837181 69206 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0805 18:32:02.878191 69206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0805 18:32:02.889381 69206 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2972 bytes)
I0805 18:32:02.906636 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:03.024121 69206 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:32:02.860395 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:05.379381 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:03.590935 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:03.591437 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:03.591472 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:03.591414 69763 retry.go:31] will retry after 1.723765385s: waiting for machine to come up
I0805 18:32:05.316828 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:05.317324 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:05.317350 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:05.317277 69763 retry.go:31] will retry after 2.077508001s: waiting for machine to come up
I0805 18:32:07.397710 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:07.398442 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:07.398485 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:07.398403 69763 retry.go:31] will retry after 2.45202207s: waiting for machine to come up
I0805 18:32:05.909404 69206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.885234302s)
I0805 18:32:05.909500 69206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:32:05.931711 69206 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0805 18:32:05.931735 69206 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
I0805 18:32:05.931743 69206 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
I0805 18:32:05.933268 69206 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:32:05.933514 69206 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
I0805 18:32:05.933732 69206 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:05.934045 69206 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:32:05.934122 69206 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0805 18:32:05.934263 69206 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
I0805 18:32:05.934411 69206 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:05.935029 69206 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:05.935058 69206 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:05.935254 69206 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0805 18:32:05.935533 69206 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:05.935620 69206 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
I0805 18:32:05.935772 69206 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:05.935814 69206 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:05.936611 69206 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
I0805 18:32:05.936771 69206 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.074619 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.093909 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
I0805 18:32:06.096435 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:06.098615 69206 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
I0805 18:32:06.098654 69206 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.098687 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
I0805 18:32:06.099664 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:06.110280 69206 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
I0805 18:32:06.110326 69206 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
I0805 18:32:06.110370 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
I0805 18:32:06.112470 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:06.119997 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
I0805 18:32:06.162118 69206 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
I0805 18:32:06.162170 69206 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:06.162221 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
I0805 18:32:06.175405 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
I0805 18:32:06.175525 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
I0805 18:32:06.175522 69206 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
I0805 18:32:06.175606 69206 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:06.175690 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
I0805 18:32:06.184770 69206 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
I0805 18:32:06.184827 69206 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:06.184879 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
I0805 18:32:06.198273 69206 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
I0805 18:32:06.198322 69206 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
I0805 18:32:06.198377 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
I0805 18:32:06.218138 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
I0805 18:32:06.218204 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
I0805 18:32:06.224276 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0805 18:32:06.224462 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
I0805 18:32:06.228029 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
I0805 18:32:06.242302 69206 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0805 18:32:06.242353 69206 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0805 18:32:06.242449 69206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0805 18:32:06.259267 69206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0805 18:32:06.550434 69206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0805 18:32:06.568355 69206 cache_images.go:92] duration metric: took 636.595171ms to LoadCachedImages
W0805 18:32:06.568493 69206 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19374-5415/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
I0805 18:32:06.568511 69206 kubeadm.go:934] updating node { 192.168.61.245 8443 v1.20.0 docker true true} ...
I0805 18:32:06.568643 69206 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-336753 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.245
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0805 18:32:06.568721 69206 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0805 18:32:06.627890 69206 cni.go:84] Creating CNI manager for ""
I0805 18:32:06.627934 69206 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0805 18:32:06.627946 69206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0805 18:32:06.627968 69206 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-336753 NodeName:old-k8s-version-336753 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0805 18:32:06.628110 69206 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.61.245
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-336753"
kubeletExtraArgs:
node-ip: 192.168.61.245
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.61.245"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0805 18:32:06.628169 69206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0805 18:32:06.638230 69206 binaries.go:44] Found k8s binaries, skipping transfer
I0805 18:32:06.638309 69206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0805 18:32:06.647838 69206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
I0805 18:32:06.665925 69206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0805 18:32:06.682790 69206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0805 18:32:06.700171 69206 ssh_runner.go:195] Run: grep 192.168.61.245 control-plane.minikube.internal$ /etc/hosts
I0805 18:32:06.703832 69206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.245 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:32:06.716333 69206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:06.839269 69206 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 18:32:06.858244 69206 certs.go:68] Setting up /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753 for IP: 192.168.61.245
I0805 18:32:06.858266 69206 certs.go:194] generating shared ca certs ...
I0805 18:32:06.858283 69206 certs.go:226] acquiring lock for ca certs: {Name:mkd5950c6b2de2854a748470350a45601540dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:32:06.858443 69206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key
I0805 18:32:06.858531 69206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key
I0805 18:32:06.858547 69206 certs.go:256] generating profile certs ...
I0805 18:32:06.858663 69206 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/client.key
I0805 18:32:06.858754 69206 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/apiserver.key.cc820c21
I0805 18:32:06.858806 69206 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/proxy-client.key
I0805 18:32:06.858961 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem (1338 bytes)
W0805 18:32:06.859002 69206 certs.go:480] ignoring /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581_empty.pem, impossibly tiny 0 bytes
I0805 18:32:06.859017 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem (1679 bytes)
I0805 18:32:06.859055 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem (1082 bytes)
I0805 18:32:06.859093 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem (1123 bytes)
I0805 18:32:06.859139 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem (1679 bytes)
I0805 18:32:06.859200 69206 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:32:06.860050 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0805 18:32:06.915064 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0805 18:32:06.946956 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0805 18:32:06.984254 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0805 18:32:07.018204 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0805 18:32:07.054142 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0805 18:32:07.081443 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0805 18:32:07.108923 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/old-k8s-version-336753/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0805 18:32:07.137147 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /usr/share/ca-certificates/125812.pem (1708 bytes)
I0805 18:32:07.167904 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0805 18:32:07.193564 69206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem --> /usr/share/ca-certificates/12581.pem (1338 bytes)
I0805 18:32:07.218353 69206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0805 18:32:07.235064 69206 ssh_runner.go:195] Run: openssl version
I0805 18:32:07.240645 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125812.pem && ln -fs /usr/share/ca-certificates/125812.pem /etc/ssl/certs/125812.pem"
I0805 18:32:07.251142 69206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125812.pem
I0805 18:32:07.255460 69206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 5 17:34 /usr/share/ca-certificates/125812.pem
I0805 18:32:07.255522 69206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125812.pem
I0805 18:32:07.261517 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125812.pem /etc/ssl/certs/3ec20f2e.0"
I0805 18:32:07.272659 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0805 18:32:07.284014 69206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:07.288617 69206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 5 17:27 /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:07.288674 69206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:07.294475 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0805 18:32:07.305197 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12581.pem && ln -fs /usr/share/ca-certificates/12581.pem /etc/ssl/certs/12581.pem"
I0805 18:32:07.315815 69206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12581.pem
I0805 18:32:07.320271 69206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 5 17:34 /usr/share/ca-certificates/12581.pem
I0805 18:32:07.320348 69206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12581.pem
I0805 18:32:07.325863 69206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12581.pem /etc/ssl/certs/51391683.0"
I0805 18:32:07.337310 69206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0805 18:32:07.341694 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0805 18:32:07.347758 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0805 18:32:07.353646 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0805 18:32:07.360573 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0805 18:32:07.366277 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0805 18:32:07.371972 69206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0805 18:32:07.377629 69206 kubeadm.go:392] StartCluster: {Name:old-k8s-version-336753 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:32:07.377755 69206 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:32:07.399602 69206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0805 18:32:07.409778 69206 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0805 18:32:07.409797 69206 kubeadm.go:593] restartPrimaryControlPlane start ...
I0805 18:32:07.409868 69206 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0805 18:32:07.419469 69206 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0805 18:32:07.420169 69206 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-336753" does not appear in /home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:32:07.420515 69206 kubeconfig.go:62] /home/jenkins/minikube-integration/19374-5415/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-336753" cluster setting kubeconfig missing "old-k8s-version-336753" context setting]
I0805 18:32:07.421105 69206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19374-5415/kubeconfig: {Name:mk625b9ea6f09360b6a4e9f50277b2927e24bcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:32:07.422415 69206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0805 18:32:07.431823 69206 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.245
I0805 18:32:07.431855 69206 kubeadm.go:1160] stopping kube-system containers ...
I0805 18:32:07.431907 69206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:32:07.451423 69206 docker.go:483] Stopping containers: [e68122d164ec ce669064e09b 766797baaa4f fa41201b5f96 44149676ddea f91944446f59 9ed94d80b93d 690ac9b998c7 237dc0dd0e18 d2b74079b40b 1af3a8bc4cd6 16b126554787 e4b8eb5a542a 6365e48ae40b]
I0805 18:32:07.451503 69206 ssh_runner.go:195] Run: docker stop e68122d164ec ce669064e09b 766797baaa4f fa41201b5f96 44149676ddea f91944446f59 9ed94d80b93d 690ac9b998c7 237dc0dd0e18 d2b74079b40b 1af3a8bc4cd6 16b126554787 e4b8eb5a542a 6365e48ae40b
I0805 18:32:07.471832 69206 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0805 18:32:07.487257 69206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0805 18:32:07.497933 69206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0805 18:32:07.497966 69206 kubeadm.go:157] found existing configuration files:
I0805 18:32:07.498027 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0805 18:32:07.507792 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0805 18:32:07.507863 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0805 18:32:07.518079 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0805 18:32:07.527296 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0805 18:32:07.527349 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0805 18:32:07.537173 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0805 18:32:07.547338 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0805 18:32:07.547405 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0805 18:32:07.557712 69206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0805 18:32:07.567470 69206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0805 18:32:07.567539 69206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0805 18:32:07.577501 69206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0805 18:32:07.587276 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:07.754599 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:08.725201 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:08.995809 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:09.189775 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:09.385525 69206 api_server.go:52] waiting for apiserver process to appear ...
I0805 18:32:09.385628 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:09.886448 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:10.385757 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:07.858899 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:09.859165 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:12.360049 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:09.852915 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:09.853493 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | unable to find current IP address of domain default-k8s-diff-port-466451 in network mk-default-k8s-diff-port-466451
I0805 18:32:09.853526 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | I0805 18:32:09.853440 69763 retry.go:31] will retry after 4.448346046s: waiting for machine to come up
I0805 18:32:10.885726 69206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 18:32:10.913680 69206 api_server.go:72] duration metric: took 1.528156079s to wait for apiserver process to appear ...
I0805 18:32:10.913712 69206 api_server.go:88] waiting for apiserver healthz status ...
I0805 18:32:10.913739 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:10.914167 69206 api_server.go:269] stopped: https://192.168.61.245:8443/healthz: Get "https://192.168.61.245:8443/healthz": dial tcp 192.168.61.245:8443: connect: connection refused
I0805 18:32:11.414026 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:15.461466 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 18:32:15.461494 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 18:32:15.461505 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:15.487427 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 18:32:15.487458 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 18:32:14.859509 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:17.358250 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:14.305410 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.305964 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has current primary IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.305999 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Found IP for machine: 192.168.72.196
I0805 18:32:14.306013 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Reserving static IP address...
I0805 18:32:14.306514 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-466451", mac: "52:54:00:4d:3f:ba", ip: "192.168.72.196"} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.306550 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | skip adding static IP to network mk-default-k8s-diff-port-466451 - found existing host DHCP lease matching {name: "default-k8s-diff-port-466451", mac: "52:54:00:4d:3f:ba", ip: "192.168.72.196"}
I0805 18:32:14.306566 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Reserved static IP address: 192.168.72.196
I0805 18:32:14.306615 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Waiting for SSH to be available...
I0805 18:32:14.306661 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | Getting to WaitForSSH function...
I0805 18:32:14.308543 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.308917 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.308968 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.309203 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | Using SSH client type: external
I0805 18:32:14.309231 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | Using SSH private key: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa (-rw-------)
I0805 18:32:14.309257 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa -p 22] /usr/bin/ssh <nil>}
I0805 18:32:14.309274 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | About to run SSH command:
I0805 18:32:14.309289 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | exit 0
I0805 18:32:14.431741 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | SSH cmd err, output: <nil>:
I0805 18:32:14.432097 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetConfigRaw
I0805 18:32:14.432837 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetIP
I0805 18:32:14.435671 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.436109 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.436156 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.436428 69364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/config.json ...
I0805 18:32:14.436629 69364 machine.go:94] provisionDockerMachine start ...
I0805 18:32:14.436649 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:14.436922 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.439272 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.439651 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.439698 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.439778 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:14.439969 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.440144 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.440296 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:14.440454 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:14.440629 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:14.440640 69364 main.go:141] libmachine: About to run SSH command:
hostname
I0805 18:32:14.543996 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0805 18:32:14.544030 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetMachineName
I0805 18:32:14.544300 69364 buildroot.go:166] provisioning hostname "default-k8s-diff-port-466451"
I0805 18:32:14.544330 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetMachineName
I0805 18:32:14.544535 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.547476 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.547928 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.547963 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.548171 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:14.548403 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.548590 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.548775 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:14.548960 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:14.549183 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:14.549203 69364 main.go:141] libmachine: About to run SSH command:
sudo hostname default-k8s-diff-port-466451 && echo "default-k8s-diff-port-466451" | sudo tee /etc/hostname
I0805 18:32:14.661643 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-466451
I0805 18:32:14.661682 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.664635 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.665021 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.665064 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.665253 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:14.665448 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.665687 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:14.665904 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:14.666115 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:14.666292 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:14.666310 69364 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sdefault-k8s-diff-port-466451' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-466451/g' /etc/hosts;
else
echo '127.0.1.1 default-k8s-diff-port-466451' | sudo tee -a /etc/hosts;
fi
fi
I0805 18:32:14.777165 69364 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 18:32:14.777208 69364 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19374-5415/.minikube CaCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19374-5415/.minikube}
I0805 18:32:14.777258 69364 buildroot.go:174] setting up certificates
I0805 18:32:14.777274 69364 provision.go:84] configureAuth start
I0805 18:32:14.777292 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetMachineName
I0805 18:32:14.777624 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetIP
I0805 18:32:14.780382 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.780816 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.780846 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.781018 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:14.783718 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.784083 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:14.784098 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:14.784272 69364 provision.go:143] copyHostCerts
I0805 18:32:14.784325 69364 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem, removing ...
I0805 18:32:14.784334 69364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem
I0805 18:32:14.784402 69364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/cert.pem (1123 bytes)
I0805 18:32:14.784533 69364 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem, removing ...
I0805 18:32:14.784551 69364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem
I0805 18:32:14.784574 69364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/key.pem (1679 bytes)
I0805 18:32:14.784624 69364 exec_runner.go:144] found /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem, removing ...
I0805 18:32:14.784631 69364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem
I0805 18:32:14.784648 69364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19374-5415/.minikube/ca.pem (1082 bytes)
I0805 18:32:14.784721 69364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-466451 san=[127.0.0.1 192.168.72.196 default-k8s-diff-port-466451 localhost minikube]
I0805 18:32:15.079259 69364 provision.go:177] copyRemoteCerts
I0805 18:32:15.079326 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 18:32:15.079354 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.082718 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.083129 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.083161 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.083322 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.083551 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.083745 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.083962 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:15.165749 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0805 18:32:15.190499 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0805 18:32:15.214983 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0805 18:32:15.240118 69364 provision.go:87] duration metric: took 462.826686ms to configureAuth
I0805 18:32:15.240156 69364 buildroot.go:189] setting minikube options for container-runtime
I0805 18:32:15.240385 69364 config.go:182] Loaded profile config "default-k8s-diff-port-466451": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 18:32:15.240413 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:15.240694 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.243334 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.243778 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.243804 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.244001 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.244200 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.244372 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.244490 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.244702 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:15.244915 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:15.244933 69364 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 18:32:15.345449 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0805 18:32:15.345481 69364 buildroot.go:70] root file system type: tmpfs
I0805 18:32:15.345608 69364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 18:32:15.345636 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.349419 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.349774 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.349822 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.350095 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.350290 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.350485 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.350616 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.350780 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:15.351013 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:15.351085 69364 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 18:32:15.468651 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0805 18:32:15.468678 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:15.471891 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.472304 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:15.472337 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:15.472593 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:15.472795 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.472972 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:15.473133 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:15.473319 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:15.473533 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:15.473562 69364 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 18:32:17.329517 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0805 18:32:17.329554 69364 machine.go:97] duration metric: took 2.892911259s to provisionDockerMachine
I0805 18:32:17.329569 69364 start.go:293] postStartSetup for "default-k8s-diff-port-466451" (driver="kvm2")
I0805 18:32:17.329580 69364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 18:32:17.329601 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.329958 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 18:32:17.329985 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.332926 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.333353 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.333387 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.333569 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.333774 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.333949 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.334088 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:17.414167 69364 ssh_runner.go:195] Run: cat /etc/os-release
I0805 18:32:17.418295 69364 info.go:137] Remote host: Buildroot 2023.02.9
I0805 18:32:17.418325 69364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/addons for local assets ...
I0805 18:32:17.418399 69364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19374-5415/.minikube/files for local assets ...
I0805 18:32:17.418516 69364 filesync.go:149] local asset: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem -> 125812.pem in /etc/ssl/certs
I0805 18:32:17.418642 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0805 18:32:17.429206 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:32:17.457903 69364 start.go:296] duration metric: took 128.31976ms for postStartSetup
I0805 18:32:17.457973 69364 fix.go:56] duration metric: took 22.261151421s for fixHost
I0805 18:32:17.457998 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.460758 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.461200 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.461231 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.461338 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.461569 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.461759 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.461907 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.462081 69364 main.go:141] libmachine: Using SSH client type: native
I0805 18:32:17.462298 69364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil> [] 0s} 192.168.72.196 22 <nil> <nil>}
I0805 18:32:17.462314 69364 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0805 18:32:17.565114 69364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722882737.539804707
I0805 18:32:17.565139 69364 fix.go:216] guest clock: 1722882737.539804707
I0805 18:32:17.565149 69364 fix.go:229] Guest: 2024-08-05 18:32:17.539804707 +0000 UTC Remote: 2024-08-05 18:32:17.45797871 +0000 UTC m=+49.453468695 (delta=81.825997ms)
I0805 18:32:17.565167 69364 fix.go:200] guest clock delta is within tolerance: 81.825997ms
I0805 18:32:17.565172 69364 start.go:83] releasing machines lock for "default-k8s-diff-port-466451", held for 22.368402757s
I0805 18:32:17.565191 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.565488 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetIP
I0805 18:32:17.568934 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.569306 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.569336 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.569641 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.570224 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.570449 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .DriverName
I0805 18:32:17.570557 69364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0805 18:32:17.570601 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.570710 69364 ssh_runner.go:195] Run: cat /version.json
I0805 18:32:17.570759 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHHostname
I0805 18:32:17.573572 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.573877 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.573947 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.573973 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.574140 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.574360 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.574364 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:17.574413 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:17.574444 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHPort
I0805 18:32:17.574559 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.574634 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHKeyPath
I0805 18:32:17.574712 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:17.574805 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetSSHUsername
I0805 18:32:17.574940 69364 sshutil.go:53] new ssh client: &{IP:192.168.72.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19374-5415/.minikube/machines/default-k8s-diff-port-466451/id_rsa Username:docker}
I0805 18:32:17.678350 69364 ssh_runner.go:195] Run: systemctl --version
I0805 18:32:17.685777 69364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0805 18:32:17.692712 69364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0805 18:32:17.692793 69364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0805 18:32:17.711795 69364 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0805 18:32:17.711824 69364 start.go:495] detecting cgroup driver to use...
I0805 18:32:17.711954 69364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:32:17.731288 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0805 18:32:17.745925 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0805 18:32:17.756636 69364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0805 18:32:17.756726 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0805 18:32:17.768106 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:32:17.780797 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0805 18:32:17.794593 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 18:32:17.807124 69364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0805 18:32:17.817661 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0805 18:32:17.828041 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0805 18:32:17.839068 69364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0805 18:32:17.850068 69364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0805 18:32:17.859839 69364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0805 18:32:17.869726 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:17.997849 69364 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0805 18:32:18.024955 69364 start.go:495] detecting cgroup driver to use...
I0805 18:32:18.025032 69364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0805 18:32:15.914556 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:16.024697 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0805 18:32:16.024747 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0805 18:32:16.414227 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:16.430439 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0805 18:32:16.430483 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0805 18:32:16.914810 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:16.923950 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0805 18:32:16.923980 69206 api_server.go:103] status: https://192.168.61.245:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0805 18:32:17.414633 69206 api_server.go:253] Checking apiserver healthz at https://192.168.61.245:8443/healthz ...
I0805 18:32:17.422101 69206 api_server.go:279] https://192.168.61.245:8443/healthz returned 200:
ok
I0805 18:32:17.431117 69206 api_server.go:141] control plane version: v1.20.0
I0805 18:32:17.431301 69206 api_server.go:131] duration metric: took 6.517577987s to wait for apiserver health ...
I0805 18:32:17.431480 69206 cni.go:84] Creating CNI manager for ""
I0805 18:32:17.431505 69206 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0805 18:32:17.431519 69206 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 18:32:17.441539 69206 system_pods.go:59] 7 kube-system pods found
I0805 18:32:17.441565 69206 system_pods.go:61] "coredns-74ff55c5b-np6jj" [0d5e9a18-1480-4732-b21a-df2a982c5e4d] Running
I0805 18:32:17.441574 69206 system_pods.go:61] "etcd-old-k8s-version-336753" [5e1193a0-9fbb-4a53-b35e-f0d47c003742] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0805 18:32:17.441583 69206 system_pods.go:61] "kube-apiserver-old-k8s-version-336753" [d4b24340-8f76-4454-b4f4-366afcff1baa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0805 18:32:17.441589 69206 system_pods.go:61] "kube-controller-manager-old-k8s-version-336753" [ea7c5a0b-5bc8-4a3d-8319-76d1372d6140] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0805 18:32:17.441596 69206 system_pods.go:61] "kube-proxy-wsr6r" [d5fab68a-44c2-4740-ae33-5ce3884921e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0805 18:32:17.441602 69206 system_pods.go:61] "kube-scheduler-old-k8s-version-336753" [329f71f6-39db-4cf4-aa1e-aa555f5e787f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0805 18:32:17.441608 69206 system_pods.go:61] "storage-provisioner" [5bb92b05-c903-4c3d-a0fc-903e2bd0b9a5] Running
I0805 18:32:17.441614 69206 system_pods.go:74] duration metric: took 10.087626ms to wait for pod list to return data ...
I0805 18:32:17.441620 69206 node_conditions.go:102] verifying NodePressure condition ...
I0805 18:32:17.445294 69206 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0805 18:32:17.445319 69206 node_conditions.go:123] node cpu capacity is 2
I0805 18:32:17.445330 69206 node_conditions.go:105] duration metric: took 3.705521ms to run NodePressure ...
I0805 18:32:17.445345 69206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0805 18:32:17.833277 69206 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0805 18:32:17.837109 69206 kubeadm.go:739] kubelet initialised
I0805 18:32:17.837131 69206 kubeadm.go:740] duration metric: took 3.829485ms waiting for restarted kubelet to initialise ...
I0805 18:32:17.837138 69206 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0805 18:32:17.842936 69206 pod_ready.go:78] waiting up to 4m0s for pod "coredns-74ff55c5b-np6jj" in "kube-system" namespace to be "Ready" ...
I0805 18:32:19.849745 69206 pod_ready.go:102] pod "coredns-74ff55c5b-np6jj" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:18.040183 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:32:18.054408 69364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0805 18:32:18.073484 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0805 18:32:18.090445 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:32:18.108238 69364 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0805 18:32:18.138951 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 18:32:18.154636 69364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 18:32:18.173519 69364 ssh_runner.go:195] Run: which cri-dockerd
I0805 18:32:18.177517 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0805 18:32:18.186888 69364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0805 18:32:18.204195 69364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0805 18:32:18.316077 69364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0805 18:32:18.446649 69364 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0805 18:32:18.446844 69364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0805 18:32:18.464297 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:18.584434 69364 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 18:32:21.030295 69364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.445826481s)
I0805 18:32:21.030370 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0805 18:32:21.045090 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:32:21.061908 69364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0805 18:32:21.207689 69364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0805 18:32:21.338300 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:21.468001 69364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0805 18:32:21.491827 69364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 18:32:21.515257 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:21.673908 69364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0805 18:32:21.772773 69364 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0805 18:32:21.772848 69364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0805 18:32:21.779915 69364 start.go:563] Will wait 60s for crictl version
I0805 18:32:21.779976 69364 ssh_runner.go:195] Run: which crictl
I0805 18:32:21.785790 69364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0805 18:32:21.833059 69364 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0805 18:32:21.833125 69364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:32:21.864226 69364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 18:32:19.358439 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:21.358853 68580 pod_ready.go:102] pod "metrics-server-6867b74b74-829pz" in "kube-system" namespace has status "Ready":"False"
I0805 18:32:21.892931 69364 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
I0805 18:32:21.892980 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) Calling .GetIP
I0805 18:32:21.896494 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:21.896775 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:3f:ba", ip: ""} in network mk-default-k8s-diff-port-466451: {Iface:virbr4 ExpiryTime:2024-08-05 19:32:06 +0000 UTC Type:0 Mac:52:54:00:4d:3f:ba Iaid: IPaddr:192.168.72.196 Prefix:24 Hostname:default-k8s-diff-port-466451 Clientid:01:52:54:00:4d:3f:ba}
I0805 18:32:21.896812 69364 main.go:141] libmachine: (default-k8s-diff-port-466451) DBG | domain default-k8s-diff-port-466451 has defined IP address 192.168.72.196 and MAC address 52:54:00:4d:3f:ba in network mk-default-k8s-diff-port-466451
I0805 18:32:21.897915 69364 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0805 18:32:21.903549 69364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0805 18:32:21.915901 69364 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-466451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-466451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.196 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: N
etwork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0805 18:32:21.916023 69364 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 18:32:21.916076 69364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:32:21.934878 69364 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0805 18:32:21.934901 69364 docker.go:615] Images already preloaded, skipping extraction
I0805 18:32:21.934991 69364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 18:32:21.954784 69364 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0805 18:32:21.954809 69364 cache_images.go:84] Images are preloaded, skipping loading
I0805 18:32:21.954821 69364 kubeadm.go:934] updating node { 192.168.72.196 8444 v1.30.3 docker true true} ...
I0805 18:32:21.954946 69364 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-466451 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.196
[Install]
config:
{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-466451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0805 18:32:21.955022 69364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0805 18:32:22.020698 69364 cni.go:84] Creating CNI manager for ""
I0805 18:32:22.020719 69364 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 18:32:22.020728 69364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0805 18:32:22.020752 69364 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.196 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-466451 NodeName:default-k8s-diff-port-466451 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0805 18:32:22.020977 69364 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.72.196
bindPort: 8444
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "default-k8s-diff-port-466451"
kubeletExtraArgs:
node-ip: 192.168.72.196
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.72.196"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8444
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0805 18:32:22.021051 69364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
I0805 18:32:22.034756 69364 binaries.go:44] Found k8s binaries, skipping transfer
I0805 18:32:22.034823 69364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0805 18:32:22.045857 69364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
I0805 18:32:22.064101 69364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0805 18:32:22.084855 69364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
I0805 18:32:22.105841 69364 ssh_runner.go:195] Run: grep 192.168.72.196 control-plane.minikube.internal$ /etc/hosts
I0805 18:32:22.110522 69364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.196 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
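The `/etc/hosts` rewrite above uses a replace-by-filter idiom: drop any stale line for the control-plane name, then append the fresh IP mapping. A minimal sketch of the same pattern against a scratch file (paths, IPs, and the space separator here are illustrative, not minikube's actual tab-delimited entry):

```shell
# Build a scratch hosts file with a stale control-plane entry (illustrative).
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.72.5 control-plane.minikube.internal\n' > "$hosts"
# Filter out any existing entry for the name, then append the new mapping.
{ grep -v 'control-plane\.minikube\.internal$' "$hosts"; \
  echo "192.168.72.196 control-plane.minikube.internal"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
entry=$(grep 'control-plane.minikube.internal' "$hosts")
echo "$entry"
rm -f "$hosts"
```

Writing to a temp file and copying it back (as the logged command does with `sudo cp`) avoids truncating `/etc/hosts` mid-edit.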
I0805 18:32:22.127069 69364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 18:32:22.294937 69364 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 18:32:22.317243 69364 certs.go:68] Setting up /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451 for IP: 192.168.72.196
I0805 18:32:22.317268 69364 certs.go:194] generating shared ca certs ...
I0805 18:32:22.317288 69364 certs.go:226] acquiring lock for ca certs: {Name:mkd5950c6b2de2854a748470350a45601540dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:32:22.317463 69364 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key
I0805 18:32:22.317519 69364 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key
I0805 18:32:22.317533 69364 certs.go:256] generating profile certs ...
I0805 18:32:22.317642 69364 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/client.key
I0805 18:32:22.317715 69364 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/apiserver.key.a49af6de
I0805 18:32:22.317760 69364 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/proxy-client.key
I0805 18:32:22.317906 69364 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem (1338 bytes)
W0805 18:32:22.317949 69364 certs.go:480] ignoring /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581_empty.pem, impossibly tiny 0 bytes
I0805 18:32:22.317963 69364 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca-key.pem (1679 bytes)
I0805 18:32:22.317993 69364 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/ca.pem (1082 bytes)
I0805 18:32:22.318020 69364 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/cert.pem (1123 bytes)
I0805 18:32:22.318043 69364 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/certs/key.pem (1679 bytes)
I0805 18:32:22.318086 69364 certs.go:484] found cert: /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem (1708 bytes)
I0805 18:32:22.318898 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0805 18:32:22.360240 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0805 18:32:22.396214 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0805 18:32:22.435682 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0805 18:32:22.478618 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0805 18:32:22.517238 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0805 18:32:22.556147 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0805 18:32:22.586479 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/profiles/default-k8s-diff-port-466451/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0805 18:32:22.621184 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/certs/12581.pem --> /usr/share/ca-certificates/12581.pem (1338 bytes)
I0805 18:32:22.656689 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/files/etc/ssl/certs/125812.pem --> /usr/share/ca-certificates/125812.pem (1708 bytes)
I0805 18:32:22.686029 69364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19374-5415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0805 18:32:22.715686 69364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0805 18:32:22.735182 69364 ssh_runner.go:195] Run: openssl version
I0805 18:32:22.741045 69364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12581.pem && ln -fs /usr/share/ca-certificates/12581.pem /etc/ssl/certs/12581.pem"
I0805 18:32:22.751996 69364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12581.pem
I0805 18:32:22.756269 69364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 5 17:34 /usr/share/ca-certificates/12581.pem
I0805 18:32:22.756333 69364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12581.pem
I0805 18:32:22.762716 69364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12581.pem /etc/ssl/certs/51391683.0"
I0805 18:32:22.773894 69364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125812.pem && ln -fs /usr/share/ca-certificates/125812.pem /etc/ssl/certs/125812.pem"
I0805 18:32:22.789154 69364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125812.pem
I0805 18:32:22.794092 69364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 5 17:34 /usr/share/ca-certificates/125812.pem
I0805 18:32:22.794157 69364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125812.pem
I0805 18:32:22.799932 69364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125812.pem /etc/ssl/certs/3ec20f2e.0"
I0805 18:32:22.811357 69364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0805 18:32:22.822388 69364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:22.827315 69364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 5 17:27 /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:22.827381 69364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0805 18:32:22.833867 69364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
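The symlink names in the runs above (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) are OpenSSL subject-hash filenames: OpenSSL locates a trusted CA by hashing its subject and looking up `<hash>.0` in the certs directory. A sketch of how such a link name is derived, using a throwaway self-signed cert rather than minikube's real CA:

```shell
# Generate a throwaway CA cert (illustrative; not minikube's actual CA).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.pem" \
  -days 1 -subj "/CN=demoCA" 2>/dev/null
# Subject hash: an 8-hex-digit name, giving links like b5213941.0 above.
hash=$(openssl x509 -hash -noout -in "$tmpdir/ca.pem")
ln -fs "$tmpdir/ca.pem" "$tmpdir/$hash.0"
echo "$hash"
rm -rf "$tmpdir"
```

This is why the log first runs `openssl x509 -hash -noout` on each PEM and only then creates the `/etc/ssl/certs/<hash>.0` symlink.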
I0805 18:32:22.844859 69364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0805 18:32:22.850269 69364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0805 18:32:22.856411 69364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0805 18:32:22.864342 69364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0805 18:32:22.873541 69364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0805 18:32:22.881944 69364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0805 18:32:22.888375 69364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
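Each `-checkend 86400` run above asks whether a certificate will still be valid 24 hours from now (exit status 0 means yes). The same probe can be reproduced against any certificate; a minimal sketch using a freshly generated self-signed cert in place of the minikube control-plane certs:

```shell
# Throwaway self-signed cert standing in for minikube's real ones (illustrative).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/demo.key" -out "$tmpdir/demo.crt" \
  -days 2 -subj "/CN=checkend-demo" 2>/dev/null
# -checkend N exits 0 iff the cert is still valid N seconds from now.
check=$(openssl x509 -noout -in "$tmpdir/demo.crt" -checkend 86400 \
  && echo still-valid-in-24h)
echo "$check"
rm -rf "$tmpdir"
```

A nonzero exit here is what would push minikube to regenerate a cert instead of reusing it.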
I0805 18:32:22.895118 69364 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-466451 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-466451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.196 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 18:32:22.895242 69364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:32:22.916622 69364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0805 18:32:22.929219 69364 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0805 18:32:22.929243 69364 kubeadm.go:593] restartPrimaryControlPlane start ...
I0805 18:32:22.929294 69364 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0805 18:32:22.940404 69364 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0805 18:32:22.941238 69364 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-466451" does not appear in /home/jenkins/minikube-integration/19374-5415/kubeconfig
I0805 18:32:22.941731 69364 kubeconfig.go:62] /home/jenkins/minikube-integration/19374-5415/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-466451" cluster setting kubeconfig missing "default-k8s-diff-port-466451" context setting]
I0805 18:32:22.942503 69364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19374-5415/kubeconfig: {Name:mk625b9ea6f09360b6a4e9f50277b2927e24bcde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 18:32:22.944217 69364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0805 18:32:22.957489 69364 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.196
I0805 18:32:22.957529 69364 kubeadm.go:1160] stopping kube-system containers ...
I0805 18:32:22.957604 69364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 18:32:22.983964 69364 docker.go:483] Stopping containers: [683e352fcbdd 05de3173dcc6 0b6f628ef31b 784f49bdea51 ac9c0e05a328 9987470cc71f 6e7ea75795a2 f7df5ff9dca2 59a7d1641024 3747015fffe5 9bb626647025 48ad06c55981 390d33775ad7 f329f22168e2 3a223dfa2e1f 4afcdbd0625e]
I0805 18:32:22.984052 69364 ssh_runner.go:195] Run: docker stop 683e352fcbdd 05de3173dcc6 0b6f628ef31b 784f49bdea51 ac9c0e05a328 9987470cc71f 6e7ea75795a2 f7df5ff9dca2 59a7d1641024 3747015fffe5 9bb626647025 48ad06c55981 390d33775ad7 f329f22168e2 3a223dfa2e1f 4afcdbd0625e
I0805 18:32:23.011974 69364 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0805 18:32:23.035996 69364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
==> Docker <==
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.793841760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.793940392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.828158955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.828347342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.828360674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.828646109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.886259241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.887679850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.887793099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:22 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:22.888501393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:23 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a16812129cba56400c7ec795b34570c8ddf207abb34691bfe6adfc2f11fe38f/resolv.conf as [nameserver 192.168.122.1]"
Aug 05 18:32:23 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2006920987ae72bb5ba3e59bef4f6b30b410d9e619584f9e63ecce8317a134a4/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Aug 05 18:32:23 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e582e7bc69e8c7c0a8222f2d32b06a51d1eb6d61397834c824ae362f5c52301f/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Aug 05 18:32:23 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c43750e8e74eb72b144d65a50b3ffbca263ec1878cc7c9624095f510da144774/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Aug 05 18:32:23 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:23.757199881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 05 18:32:23 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:23.757497629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 05 18:32:23 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:23.757553166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:23 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:23.757660775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 05 18:32:23 newest-cni-006868 dockerd[843]: time="2024-08-05T18:32:23.964701081Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Aug 05 18:32:24 newest-cni-006868 dockerd[849]: time="2024-08-05T18:32:24.024250238Z" level=error msg="(*service).Write failed" error="rpc error: code = FailedPrecondition desc = unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" expected="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" ref="unknown-sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" total=17821
Aug 05 18:32:24 newest-cni-006868 dockerd[843]: time="2024-08-05T18:32:24.029953595Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Aug 05 18:32:24 newest-cni-006868 dockerd[843]: time="2024-08-05T18:32:24.030080901Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Aug 05 18:32:24 newest-cni-006868 dockerd[843]: time="2024-08-05T18:32:24.030108852Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Aug 05 18:32:24 newest-cni-006868 cri-dockerd[1111]: time="2024-08-05T18:32:24Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
Aug 05 18:32:24 newest-cni-006868 dockerd[843]: time="2024-08-05T18:32:24.276334565Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
2846fee14cc11 cbb01a7bd410d 2 seconds ago Running coredns 1 7a16812129cba coredns-6f6b679f8f-8lr5f
4f78f2304a62e 6e38f40d628db 3 seconds ago Running storage-provisioner 2 ae5c15f7ceec5 storage-provisioner
31b584e307bce 6e38f40d628db 42 seconds ago Exited storage-provisioner 1 ae5c15f7ceec5 storage-provisioner
4e96aea33e5f1 41cec1c4af04c 42 seconds ago Running kube-proxy 1 4ed94f18e8a47 kube-proxy-xqx9t
22c23d2af5ee9 2e96e5913fc06 46 seconds ago Running etcd 1 276cab80ccbe5 etcd-newest-cni-006868
610674a9184a1 0fd085a247d6c 46 seconds ago Running kube-scheduler 1 d829aa7f88ced kube-scheduler-newest-cni-006868
240e0b6feefae fd01d5222f3a9 46 seconds ago Running kube-controller-manager 1 200aca1d3391b kube-controller-manager-newest-cni-006868
17fd99f38c2d0 c7883f2335b7c 46 seconds ago Running kube-apiserver 1 9105e1ecb0ffc kube-apiserver-newest-cni-006868
5134dfdc0d50d cbb01a7bd410d About a minute ago Exited coredns 0 21fb70992ec35 coredns-6f6b679f8f-8lr5f
fbf011536a75c cbb01a7bd410d About a minute ago Exited coredns 0 cac43aae145f0 coredns-6f6b679f8f-88m8m
034b8846cf12c 41cec1c4af04c About a minute ago Exited kube-proxy 0 c9b4e9b85518d kube-proxy-xqx9t
067a823d9b94a 2e96e5913fc06 About a minute ago Exited etcd 0 37ceea586604d etcd-newest-cni-006868
e572d9a1938b6 c7883f2335b7c About a minute ago Exited kube-apiserver 0 50297a33ca66c kube-apiserver-newest-cni-006868
76015da0e4b2d fd01d5222f3a9 About a minute ago Exited kube-controller-manager 0 49ed690f0f0cb kube-controller-manager-newest-cni-006868
ada8531e09d7d 0fd085a247d6c About a minute ago Exited kube-scheduler 0 f4c413a7965b4 kube-scheduler-newest-cni-006868
==> coredns [2846fee14cc1] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:52921 - 51868 "HINFO IN 377631107764616844.3930146501082227140. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020018713s
==> coredns [5134dfdc0d50] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [fbf011536a75] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: newest-cni-006868
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=newest-cni-006868
kubernetes.io/os=linux
minikube.k8s.io/commit=7ab1b4d76a5d87b75cd4b70be3ee81f93304b0ab
minikube.k8s.io/name=newest-cni-006868
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_05T18_30_37_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 05 Aug 2024 18:30:34 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: newest-cni-006868
AcquireTime: <unset>
RenewTime: Mon, 05 Aug 2024 18:32:21 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:30:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:30:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:30:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 05 Aug 2024 18:32:21 +0000 Mon, 05 Aug 2024 18:31:46 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.154
Hostname: newest-cni-006868
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: f47f902bc8a2488abd68588a77f63f29
System UUID: f47f902b-c8a2-488a-bd68-588a77f63f29
Boot ID: 664702a7-1d42-4499-8048-4c37d4979011
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.31.0-rc.0
Kube-Proxy Version:
PodCIDR: 10.42.0.0/24
PodCIDRs: 10.42.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6f6b679f8f-8lr5f 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 105s
kube-system etcd-newest-cni-006868 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 111s
kube-system kube-apiserver-newest-cni-006868 250m (12%) 0 (0%) 0 (0%) 0 (0%) 110s
kube-system kube-controller-manager-newest-cni-006868 200m (10%) 0 (0%) 0 (0%) 0 (0%) 110s
kube-system kube-proxy-xqx9t 0 (0%) 0 (0%) 0 (0%) 0 (0%) 105s
kube-system kube-scheduler-newest-cni-006868 100m (5%) 0 (0%) 0 (0%) 0 (0%) 110s
kube-system metrics-server-6867b74b74-nbp4v 100m (5%) 0 (0%) 200Mi (9%) 0 (0%) 95s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 103s
kubernetes-dashboard dashboard-metrics-scraper-7c96f5b85b-98qnz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40s
kubernetes-dashboard kubernetes-dashboard-695b96c756-9b4fb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 370Mi (17%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 42s kube-proxy
Normal Starting 102s kube-proxy
Normal Starting 110s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 110s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 110s kubelet Node newest-cni-006868 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 110s kubelet Node newest-cni-006868 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 110s kubelet Node newest-cni-006868 status is now: NodeHasSufficientPID
Normal NodeReady 109s kubelet Node newest-cni-006868 status is now: NodeReady
Normal RegisteredNode 106s node-controller Node newest-cni-006868 event: Registered Node newest-cni-006868 in Controller
Normal NodeHasSufficientMemory 48s (x8 over 48s) kubelet Node newest-cni-006868 status is now: NodeHasSufficientMemory
Normal Starting 48s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 48s (x8 over 48s) kubelet Node newest-cni-006868 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 48s (x7 over 48s) kubelet Node newest-cni-006868 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 48s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 41s node-controller Node newest-cni-006868 event: Registered Node newest-cni-006868 in Controller
Normal Starting 5s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5s kubelet Node newest-cni-006868 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5s kubelet Node newest-cni-006868 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5s kubelet Node newest-cni-006868 status is now: NodeHasSufficientPID
==> dmesg <==
[ +1.994774] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +2.348964] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +6.628880] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
[ +0.056698] kauditd_printk_skb: 1 callbacks suppressed
[ +0.060771] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
[ +2.144834] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
[ +0.344064] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
[ +0.139301] systemd-fstab-generator[821]: Ignoring "noauto" option for root device
[ +0.155881] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
[ +2.272763] kauditd_printk_skb: 195 callbacks suppressed
[ +0.317386] systemd-fstab-generator[1064]: Ignoring "noauto" option for root device
[ +0.120781] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
[ +0.121553] systemd-fstab-generator[1088]: Ignoring "noauto" option for root device
[ +0.169429] systemd-fstab-generator[1103]: Ignoring "noauto" option for root device
[ +0.490098] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
[ +1.771374] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
[ +4.769059] kauditd_printk_skb: 244 callbacks suppressed
[ +1.452513] systemd-fstab-generator[2048]: Ignoring "noauto" option for root device
[ +3.412008] systemd-fstab-generator[2312]: Ignoring "noauto" option for root device
[ +0.140967] kauditd_printk_skb: 115 callbacks suppressed
[ +0.594224] systemd-fstab-generator[2487]: Ignoring "noauto" option for root device
[Aug 5 18:32] kauditd_printk_skb: 27 callbacks suppressed
[ +0.131237] systemd-fstab-generator[2735]: Ignoring "noauto" option for root device
==> etcd [067a823d9b94] <==
{"level":"info","ts":"2024-08-05T18:30:32.238436Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.243753Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:newest-cni-006868 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-05T18:30:32.243940Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:30:32.246798Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:30:32.254414Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-05T18:30:32.254482Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-05T18:30:32.254841Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.256032Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.256085Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T18:30:32.261530Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:30:32.266221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-05T18:30:32.275835Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:30:32.277306Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
{"level":"info","ts":"2024-08-05T18:30:47.170034Z","caller":"traceutil/trace.go:171","msg":"trace[41647601] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"114.016854ms","start":"2024-08-05T18:30:47.054738Z","end":"2024-08-05T18:30:47.168754Z","steps":["trace[41647601] 'process raft request' (duration: 113.668914ms)"],"step_count":1}
{"level":"info","ts":"2024-08-05T18:30:48.689648Z","caller":"traceutil/trace.go:171","msg":"trace[1645074859] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"116.176308ms","start":"2024-08-05T18:30:48.573455Z","end":"2024-08-05T18:30:48.689632Z","steps":["trace[1645074859] 'process raft request' (duration: 115.860705ms)"],"step_count":1}
{"level":"info","ts":"2024-08-05T18:30:51.711605Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2024-08-05T18:30:51.712165Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"newest-cni-006868","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
{"level":"warn","ts":"2024-08-05T18:30:51.723044Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-05T18:30:51.723714Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-05T18:30:51.804512Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
{"level":"warn","ts":"2024-08-05T18:30:51.804581Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
{"level":"info","ts":"2024-08-05T18:30:51.806942Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"10fb7b0a157fc334","current-leader-member-id":"10fb7b0a157fc334"}
{"level":"info","ts":"2024-08-05T18:30:51.819534Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:30:51.820006Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:30:51.820029Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"newest-cni-006868","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
==> etcd [22c23d2af5ee] <==
{"level":"info","ts":"2024-08-05T18:31:40.074575Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T18:31:40.074666Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T18:31:40.074693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T18:31:40.070402Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:31:40.078820Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-08-05T18:31:40.079024Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"10fb7b0a157fc334","initial-advertise-peer-urls":["https://192.168.39.154:2380"],"listen-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-08-05T18:31:40.079045Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-05T18:31:40.079132Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:31:40.079140Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.154:2380"}
{"level":"info","ts":"2024-08-05T18:31:40.701389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 is starting a new election at term 2"}
{"level":"info","ts":"2024-08-05T18:31:40.701441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became pre-candidate at term 2"}
{"level":"info","ts":"2024-08-05T18:31:40.701469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgPreVoteResp from 10fb7b0a157fc334 at term 2"}
{"level":"info","ts":"2024-08-05T18:31:40.701680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became candidate at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.701907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgVoteResp from 10fb7b0a157fc334 at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.702116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became leader at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.702159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 10fb7b0a157fc334 elected leader 10fb7b0a157fc334 at term 3"}
{"level":"info","ts":"2024-08-05T18:31:40.709437Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:newest-cni-006868 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-05T18:31:40.709878Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:31:40.711343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T18:31:40.722822Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:31:40.727478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2024-08-05T18:31:40.733729Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-05T18:31:40.733767Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-05T18:31:40.743490Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2024-08-05T18:31:40.770023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
==> kernel <==
18:32:26 up 1 min, 0 users, load average: 1.22, 0.32, 0.11
Linux newest-cni-006868 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [17fd99f38c2d] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0805 18:31:42.668066 1 cache.go:39] Caches are synced for autoregister controller
E0805 18:31:42.688241 1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
I0805 18:31:43.409684 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0805 18:31:43.643016 1 handler_proxy.go:99] no RequestInfo found in the context
E0805 18:31:43.643328 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0805 18:31:43.643507 1 handler_proxy.go:99] no RequestInfo found in the context
E0805 18:31:43.643651 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0805 18:31:43.644742 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0805 18:31:43.644923 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0805 18:31:44.252790 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0805 18:31:44.265190 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0805 18:31:44.309813 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0805 18:31:44.343548 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0805 18:31:44.353696 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0805 18:31:46.266339 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0805 18:31:46.306975 1 controller.go:615] quota admission added evaluator for: endpoints
I0805 18:31:46.740167 1 controller.go:615] quota admission added evaluator for: namespaces
I0805 18:31:46.812115 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0805 18:31:47.152316 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.12.22"}
I0805 18:31:47.176437 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.231.199"}
==> kube-apiserver [e572d9a1938b] <==
W0805 18:31:01.075580 1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.084950 1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.085045 1 logging.go:55] [core] [Channel #15 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.156038 1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.182245 1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.183528 1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.209097 1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.327784 1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.356732 1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.377249 1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.416117 1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.429197 1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.475916 1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.479447 1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.507049 1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.516788 1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.597445 1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.609387 1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.671732 1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.673076 1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.673405 1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.677756 1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.686395 1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.764038 1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0805 18:31:01.820273 1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
==> kube-controller-manager [240e0b6feefa] <==
E0805 18:31:46.925076 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0805 18:31:46.945102 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.742578ms"
E0805 18:31:46.945171 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0805 18:31:46.945324 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="31.535105ms"
E0805 18:31:46.945339 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b\" failed with pods \"dashboard-metrics-scraper-7c96f5b85b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0805 18:31:47.006119 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="58.418274ms"
I0805 18:31:47.029147 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="78.28612ms"
I0805 18:31:47.067370 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="37.818476ms"
I0805 18:31:47.104788 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="98.626819ms"
I0805 18:31:47.121483 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="53.854498ms"
I0805 18:31:47.121731 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="108.31µs"
I0805 18:31:47.131587 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="26.676796ms"
I0805 18:31:47.131940 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="166.814µs"
I0805 18:31:48.041229 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.285µs"
I0805 18:32:20.675490 1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
E0805 18:32:20.775175 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0805 18:32:20.781815 1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
I0805 18:32:21.471236 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:32:22.350164 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="119.715µs"
I0805 18:32:22.394168 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="184.907µs"
I0805 18:32:23.511822 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="77.824µs"
I0805 18:32:23.519431 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="119.488µs"
I0805 18:32:24.556440 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="64.299µs"
I0805 18:32:24.602252 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="107.371µs"
I0805 18:32:25.636961 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="77.871µs"
==> kube-controller-manager [76015da0e4b2] <==
I0805 18:30:40.945240 1 shared_informer.go:320] Caches are synced for disruption
I0805 18:30:40.951946 1 shared_informer.go:320] Caches are synced for endpoint
I0805 18:30:40.997020 1 shared_informer.go:320] Caches are synced for attach detach
I0805 18:30:40.997404 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0805 18:30:41.062245 1 shared_informer.go:320] Caches are synced for resource quota
I0805 18:30:41.067922 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:30:41.073917 1 shared_informer.go:320] Caches are synced for resource quota
I0805 18:30:41.512097 1 shared_informer.go:320] Caches are synced for garbage collector
I0805 18:30:41.552772 1 shared_informer.go:320] Caches are synced for garbage collector
I0805 18:30:41.552804 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0805 18:30:41.673719 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:30:42.055293 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="332.702089ms"
I0805 18:30:42.074793 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.394848ms"
I0805 18:30:42.110104 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="35.257114ms"
I0805 18:30:42.111229 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="162.595µs"
I0805 18:30:42.723826 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.994049ms"
I0805 18:30:42.739123 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.219438ms"
I0805 18:30:42.742328 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="72.086µs"
I0805 18:30:44.207109 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="91.446µs"
I0805 18:30:44.262155 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="73.11µs"
I0805 18:30:47.272673 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="newest-cni-006868"
I0805 18:30:51.106254 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="51.193829ms"
I0805 18:30:51.129411 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="20.338118ms"
I0805 18:30:51.132407 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="195.585µs"
I0805 18:30:51.151035 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="93.479µs"
==> kube-proxy [034b8846cf12] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0805 18:30:43.116342 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0805 18:30:43.146865 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
E0805 18:30:43.146945 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0805 18:30:43.214754 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I0805 18:30:43.214803 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0805 18:30:43.214831 1 server_linux.go:169] "Using iptables Proxier"
I0805 18:30:43.217168 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0805 18:30:43.217516 1 server.go:483] "Version info" version="v1.31.0-rc.0"
I0805 18:30:43.217548 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 18:30:43.220246 1 config.go:197] "Starting service config controller"
I0805 18:30:43.220289 1 shared_informer.go:313] Waiting for caches to sync for service config
I0805 18:30:43.220314 1 config.go:104] "Starting endpoint slice config controller"
I0805 18:30:43.220330 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0805 18:30:43.220793 1 config.go:326] "Starting node config controller"
I0805 18:30:43.220819 1 shared_informer.go:313] Waiting for caches to sync for node config
I0805 18:30:43.322698 1 shared_informer.go:320] Caches are synced for node config
I0805 18:30:43.322740 1 shared_informer.go:320] Caches are synced for service config
I0805 18:30:43.322848 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-proxy [4e96aea33e5f] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0805 18:31:43.684994 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0805 18:31:43.704492 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
E0805 18:31:43.704787 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0805 18:31:43.745220 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I0805 18:31:43.745319 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0805 18:31:43.745387 1 server_linux.go:169] "Using iptables Proxier"
I0805 18:31:43.748618 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0805 18:31:43.749596 1 server.go:483] "Version info" version="v1.31.0-rc.0"
I0805 18:31:43.749645 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 18:31:43.755467 1 config.go:326] "Starting node config controller"
I0805 18:31:43.755582 1 shared_informer.go:313] Waiting for caches to sync for node config
I0805 18:31:43.755962 1 config.go:197] "Starting service config controller"
I0805 18:31:43.756098 1 shared_informer.go:313] Waiting for caches to sync for service config
I0805 18:31:43.756154 1 config.go:104] "Starting endpoint slice config controller"
I0805 18:31:43.756225 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0805 18:31:43.855882 1 shared_informer.go:320] Caches are synced for node config
I0805 18:31:43.857000 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0805 18:31:43.857094 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [610674a9184a] <==
W0805 18:31:40.015321 1 feature_gate.go:354] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
W0805 18:31:40.016173 1 feature_gate.go:354] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
I0805 18:31:40.730522 1 serving.go:386] Generated self-signed cert in-memory
W0805 18:31:42.454748 1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0805 18:31:42.454802 1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0805 18:31:42.454813 1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
W0805 18:31:42.454822 1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0805 18:31:42.609530 1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
I0805 18:31:42.609577 1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 18:31:42.629598 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0805 18:31:42.629824 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I0805 18:31:42.632243 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0805 18:31:42.633445 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 18:31:42.734203 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [ada8531e09d7] <==
W0805 18:30:35.248716 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0805 18:30:35.248782 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.250286 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0805 18:30:35.250341 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.303524 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0805 18:30:35.303589 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.367492 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0805 18:30:35.369517 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.388498 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0805 18:30:35.388837 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0805 18:30:35.410003 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0805 18:30:35.410361 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.446941 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0805 18:30:35.447251 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.486821 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0805 18:30:35.488405 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.553386 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0805 18:30:35.553770 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0805 18:30:35.571792 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0805 18:30:35.571860 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0805 18:30:37.468628 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 18:30:51.802642 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0805 18:30:51.802765 1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I0805 18:30:51.803959 1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0805 18:30:51.809039 1 run.go:72] "command failed" err="finished without leader elect"
==> kubelet <==
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181350 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7569998c-3a39-42a8-ab1d-e146b5179424-lib-modules\") pod \"kube-proxy-xqx9t\" (UID: \"7569998c-3a39-42a8-ab1d-e146b5179424\") " pod="kube-system/kube-proxy-xqx9t"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181451 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f8983c9e-ebbc-44da-bccc-cee486a01c95-tmp\") pod \"storage-provisioner\" (UID: \"f8983c9e-ebbc-44da-bccc-cee486a01c95\") " pod="kube-system/storage-provisioner"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181522 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wr6\" (UniqueName: \"kubernetes.io/projected/046c860c-41bb-4461-877e-7193f53258f3-kube-api-access-74wr6\") pod \"kubernetes-dashboard-695b96c756-9b4fb\" (UID: \"046c860c-41bb-4461-877e-7193f53258f3\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-9b4fb"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181557 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d5bc\" (UniqueName: \"kubernetes.io/projected/17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b-kube-api-access-2d5bc\") pod \"dashboard-metrics-scraper-7c96f5b85b-98qnz\" (UID: \"17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-98qnz"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.181628 2742 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7569998c-3a39-42a8-ab1d-e146b5179424-xtables-lock\") pod \"kube-proxy-xqx9t\" (UID: \"7569998c-3a39-42a8-ab1d-e146b5179424\") " pod="kube-system/kube-proxy-xqx9t"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.282122 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvhqj\" (UniqueName: \"kubernetes.io/projected/864943b9-5315-452b-a31a-85db981929ed-kube-api-access-fvhqj\") pod \"864943b9-5315-452b-a31a-85db981929ed\" (UID: \"864943b9-5315-452b-a31a-85db981929ed\") "
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.282212 2742 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/864943b9-5315-452b-a31a-85db981929ed-config-volume\") pod \"864943b9-5315-452b-a31a-85db981929ed\" (UID: \"864943b9-5315-452b-a31a-85db981929ed\") "
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.283800 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864943b9-5315-452b-a31a-85db981929ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "864943b9-5315-452b-a31a-85db981929ed" (UID: "864943b9-5315-452b-a31a-85db981929ed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.288017 2742 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864943b9-5315-452b-a31a-85db981929ed-kube-api-access-fvhqj" (OuterVolumeSpecName: "kube-api-access-fvhqj") pod "864943b9-5315-452b-a31a-85db981929ed" (UID: "864943b9-5315-452b-a31a-85db981929ed"). InnerVolumeSpecName "kube-api-access-fvhqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.306752 2742 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.383355 2742 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/864943b9-5315-452b-a31a-85db981929ed-config-volume\") on node \"newest-cni-006868\" DevicePath \"\""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.383433 2742 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fvhqj\" (UniqueName: \"kubernetes.io/projected/864943b9-5315-452b-a31a-85db981929ed-kube-api-access-fvhqj\") on node \"newest-cni-006868\" DevicePath \"\""
Aug 05 18:32:22 newest-cni-006868 kubelet[2742]: I0805 18:32:22.418807 2742 scope.go:117] "RemoveContainer" containerID="31b584e307bce12b7f3379ec6ac16f8b6d6c6252c94a3f4120b4b6999613ffca"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: I0805 18:32:23.418720 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e582e7bc69e8c7c0a8222f2d32b06a51d1eb6d61397834c824ae362f5c52301f"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: I0805 18:32:23.483764 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2006920987ae72bb5ba3e59bef4f6b30b410d9e619584f9e63ecce8317a134a4"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: E0805 18:32:23.505014 2742 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-newest-cni-006868\" already exists" pod="kube-system/kube-controller-manager-newest-cni-006868"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: E0805 18:32:23.508551 2742 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-newest-cni-006868\" already exists" pod="kube-system/etcd-newest-cni-006868"
Aug 05 18:32:23 newest-cni-006868 kubelet[2742]: E0805 18:32:23.511900 2742 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-newest-cni-006868\" already exists" pod="kube-system/kube-scheduler-newest-cni-006868"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.033339 2742 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.033435 2742 kuberuntime_image.go:55] "Failed to pull image" err="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.034000 2742 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2d5bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-7c96f5b85b-98qnz_kubernetes-dashboard(17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b): ErrImagePull: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" logger="UnhandledError"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.035234 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-98qnz" podUID="17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b"
Aug 05 18:32:24 newest-cni-006868 kubelet[2742]: E0805 18:32:24.534430 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-98qnz" podUID="17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b"
Aug 05 18:32:25 newest-cni-006868 kubelet[2742]: I0805 18:32:25.220222 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864943b9-5315-452b-a31a-85db981929ed" path="/var/lib/kubelet/pods/864943b9-5315-452b-a31a-85db981929ed/volumes"
Aug 05 18:32:25 newest-cni-006868 kubelet[2742]: E0805 18:32:25.631034 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-98qnz" podUID="17b6c80a-ff26-4d05-9e0b-7d3ceef73c4b"
==> storage-provisioner [31b584e307bc] <==
I0805 18:31:43.406482 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0805 18:32:20.635481 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
==> storage-provisioner [4f78f2304a62] <==
I0805 18:32:22.788182 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0805 18:32:22.830507 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0805 18:32:22.831349 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-006868 -n newest-cni-006868
helpers_test.go:261: (dbg) Run: kubectl --context newest-cni-006868 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context newest-cni-006868 describe pod metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context newest-cni-006868 describe pod metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb: exit status 1 (87.572311ms)
** stderr **
Error from server (NotFound): pods "metrics-server-6867b74b74-nbp4v" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-7c96f5b85b-98qnz" not found
Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-9b4fb" not found
** /stderr **
helpers_test.go:279: kubectl --context newest-cni-006868 describe pod metrics-server-6867b74b74-nbp4v dashboard-metrics-scraper-7c96f5b85b-98qnz kubernetes-dashboard-695b96c756-9b4fb: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (39.81s)