=== RUN TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004281889s
functional_test_pvc_test.go:49: (dbg) Run: kubectl --context functional-644345 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run: kubectl --context functional-644345 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run: kubectl --context functional-644345 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run: kubectl --context functional-644345 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7916faf8-6ac9-46d3-aed5-006a182fd8d7] Pending
helpers_test.go:344: "sp-pod" [7916faf8-6ac9-46d3-aed5-006a182fd8d7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0805 11:58:09.038913 2795233 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/addons-245337/client.crt: no such file or directory
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644345 -n functional-644345
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-08-05 11:59:58.200618163 +0000 UTC m=+830.431128235
functional_test_pvc_test.go:130: (dbg) Run: kubectl --context functional-644345 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-644345 describe po sp-pod -n default:
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-644345/192.168.49.2
Start Time: Mon, 05 Aug 2024 11:56:57 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4js6j (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-4js6j:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-644345
  Warning  Failed     3m                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    88s (x4 over 3m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     88s (x4 over 3m)     kubelet            Error: ErrImagePull
  Warning  Failed     88s (x3 over 2m45s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     76s (x6 over 2m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    64s (x7 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run: kubectl --context functional-644345 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-644345 logs sp-pod -n default: exit status 1 (125.60967ms)
** stderr **
Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-644345 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect functional-644345
helpers_test.go:235: (dbg) docker inspect functional-644345:
-- stdout --
[
{
"Id": "475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb",
"Created": "2024-08-05T11:53:53.118135573Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2820007,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-08-05T11:53:53.258508129Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
"ResolvConfPath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/hostname",
"HostsPath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/hosts",
"LogPath": "/var/lib/docker/containers/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb/475a2c39b082e93db462072ea57b0e0cd96e8284cc1baab6646563089a2181bb-json.log",
"Name": "/functional-644345",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-644345:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-644345",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4194304000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8388608000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd-init/diff:/var/lib/docker/overlay2/22b51aa5a32d3ad801f10227709a4130eadbc6472f8f1192dd08ba018deb2e68/diff",
"MergedDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd/merged",
"UpperDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd/diff",
"WorkDir": "/var/lib/docker/overlay2/a1c5159b684b33d00a6b109eb4e2964b34afa75821f0725dfced083ff6012ecd/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-644345",
"Source": "/var/lib/docker/volumes/functional-644345/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-644345",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-644345",
"name.minikube.sigs.k8s.io": "functional-644345",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "dc65586b10d0f70dfd2718dbb1cfc1f2ec025c77d52c2cbcb3848a37c8ce2366",
"SandboxKey": "/var/run/docker/netns/dc65586b10d0",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36443"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36444"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36447"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36445"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "36446"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-644345": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null,
"NetworkID": "62113f275f1e97d8db3e2ecd142547662995719e9e430ce91fe8fe4bc20bbc49",
"EndpointID": "f58b18697acd7a3f57f874d9a0960723ae9618211ecf4b36f64b456422dbba5d",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-644345",
"475a2c39b082"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-644345 -n functional-644345
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p functional-644345 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-644345 logs -n 25: (1.203582522s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
| ssh | functional-644345 ssh | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | |
| | sudo crictl inspecti | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| cache | functional-644345 cache reload | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
| ssh | functional-644345 ssh | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
| | sudo crictl inspecti | | | | | |
| | registry.k8s.io/pause:latest | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
| | registry.k8s.io/pause:3.1 | | | | | |
| cache | delete | minikube | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
| | registry.k8s.io/pause:latest | | | | | |
| kubectl | functional-644345 kubectl -- | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:55 UTC |
| | --context functional-644345 | | | | | |
| | get pods | | | | | |
| start | -p functional-644345 | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:55 UTC | 05 Aug 24 11:56 UTC |
| | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision | | | | | |
| | --wait=all | | | | | |
| service | invalid-svc -p | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | |
| | functional-644345 | | | | | |
| config | functional-644345 config unset | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | cpus | | | | | |
| cp | functional-644345 cp | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | testdata/cp-test.txt | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-644345 config get | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | |
| | cpus | | | | | |
| config | functional-644345 config set | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | cpus 2 | | | | | |
| config | functional-644345 config get | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | cpus | | | | | |
| config | functional-644345 config unset | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | cpus | | | | | |
| ssh | functional-644345 ssh -n | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | functional-644345 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| config | functional-644345 config get | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | |
| | cpus | | | | | |
| ssh | functional-644345 ssh echo | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | hello | | | | | |
| cp | functional-644345 cp | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | functional-644345:/home/docker/cp-test.txt | | | | | |
| | /tmp/TestFunctionalparallelCpCmd3952485624/001/cp-test.txt | | | | | |
| ssh | functional-644345 ssh cat | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | /etc/hostname | | | | | |
| ssh | functional-644345 ssh -n | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | functional-644345 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| tunnel | functional-644345 tunnel | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | |
| | --alsologtostderr | | | | | |
| tunnel | functional-644345 tunnel | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | |
| | --alsologtostderr | | | | | |
| cp | functional-644345 cp | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | testdata/cp-test.txt | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| ssh | functional-644345 ssh -n | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | 05 Aug 24 11:56 UTC |
| | functional-644345 sudo cat | | | | | |
| | /tmp/does/not/exist/cp-test.txt | | | | | |
| tunnel | functional-644345 tunnel | functional-644345 | jenkins | v1.33.1 | 05 Aug 24 11:56 UTC | |
| | --alsologtostderr | | | | | |
|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/08/05 11:55:57
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0805 11:55:57.693945 2827274 out.go:291] Setting OutFile to fd 1 ...
I0805 11:55:57.694061 2827274 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:55:57.694065 2827274 out.go:304] Setting ErrFile to fd 2...
I0805 11:55:57.694069 2827274 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 11:55:57.694314 2827274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19377-2789855/.minikube/bin
I0805 11:55:57.694659 2827274 out.go:298] Setting JSON to false
I0805 11:55:57.695636 2827274 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":70709,"bootTime":1722788249,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0805 11:55:57.695694 2827274 start.go:139] virtualization:
I0805 11:55:57.698948 2827274 out.go:177] * [functional-644345] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0805 11:55:57.702371 2827274 out.go:177] - MINIKUBE_LOCATION=19377
I0805 11:55:57.702459 2827274 notify.go:220] Checking for updates...
I0805 11:55:57.708295 2827274 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0805 11:55:57.711069 2827274 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19377-2789855/kubeconfig
I0805 11:55:57.713672 2827274 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19377-2789855/.minikube
I0805 11:55:57.716312 2827274 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0805 11:55:57.718920 2827274 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0805 11:55:57.722131 2827274 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 11:55:57.722225 2827274 driver.go:392] Setting default libvirt URI to qemu:///system
I0805 11:55:57.743733 2827274 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
I0805 11:55:57.743855 2827274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0805 11:55:57.813952 2827274 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:64 SystemTime:2024-08-05 11:55:57.799042243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0805 11:55:57.814058 2827274 docker.go:307] overlay module found
I0805 11:55:57.817137 2827274 out.go:177] * Using the docker driver based on existing profile
I0805 11:55:57.819795 2827274 start.go:297] selected driver: docker
I0805 11:55:57.819804 2827274 start.go:901] validating driver "docker" against &{Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 11:55:57.819922 2827274 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0805 11:55:57.820024 2827274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0805 11:55:57.898315 2827274 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:64 SystemTime:2024-08-05 11:55:57.888779338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0805 11:55:57.898728 2827274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 11:55:57.898749 2827274 cni.go:84] Creating CNI manager for ""
I0805 11:55:57.898761 2827274 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 11:55:57.898818 2827274 start.go:340] cluster config:
{Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 11:55:57.901665 2827274 out.go:177] * Starting "functional-644345" primary control-plane node in "functional-644345" cluster
I0805 11:55:57.904382 2827274 cache.go:121] Beginning downloading kic base image for docker with docker
I0805 11:55:57.907294 2827274 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
I0805 11:55:57.909904 2827274 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 11:55:57.909960 2827274 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0805 11:55:57.909968 2827274 cache.go:56] Caching tarball of preloaded images
I0805 11:55:57.910124 2827274 preload.go:172] Found /home/jenkins/minikube-integration/19377-2789855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0805 11:55:57.910133 2827274 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0805 11:55:57.910223 2827274 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
I0805 11:55:57.911670 2827274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/config.json ...
W0805 11:55:57.927189 2827274 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
I0805 11:55:57.927199 2827274 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
I0805 11:55:57.927286 2827274 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
I0805 11:55:57.927305 2827274 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
I0805 11:55:57.927308 2827274 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
I0805 11:55:57.927316 2827274 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
I0805 11:55:57.927320 2827274 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
I0805 11:55:58.058111 2827274 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
I0805 11:55:58.058137 2827274 cache.go:194] Successfully downloaded all kic artifacts
I0805 11:55:58.058200 2827274 start.go:360] acquireMachinesLock for functional-644345: {Name:mkc50feaac78d4e648167b3dd0f9a2f0d677d151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 11:55:58.058298 2827274 start.go:364] duration metric: took 71.162µs to acquireMachinesLock for "functional-644345"
I0805 11:55:58.058320 2827274 start.go:96] Skipping create...Using existing machine configuration
I0805 11:55:58.058325 2827274 fix.go:54] fixHost starting:
I0805 11:55:58.058874 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 11:55:58.079072 2827274 fix.go:112] recreateIfNeeded on functional-644345: state=Running err=<nil>
W0805 11:55:58.079091 2827274 fix.go:138] unexpected machine state, will restart: <nil>
I0805 11:55:58.083732 2827274 out.go:177] * Updating the running docker "functional-644345" container ...
I0805 11:55:58.086275 2827274 machine.go:94] provisionDockerMachine start ...
I0805 11:55:58.086382 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:58.103172 2827274 main.go:141] libmachine: Using SSH client type: native
I0805 11:55:58.103428 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 36443 <nil> <nil>}
I0805 11:55:58.103435 2827274 main.go:141] libmachine: About to run SSH command:
hostname
I0805 11:55:58.235982 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-644345
I0805 11:55:58.235997 2827274 ubuntu.go:169] provisioning hostname "functional-644345"
I0805 11:55:58.236063 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:58.255866 2827274 main.go:141] libmachine: Using SSH client type: native
I0805 11:55:58.256122 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 36443 <nil> <nil>}
I0805 11:55:58.256131 2827274 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-644345 && echo "functional-644345" | sudo tee /etc/hostname
I0805 11:55:58.400851 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-644345
I0805 11:55:58.400921 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:58.419302 2827274 main.go:141] libmachine: Using SSH client type: native
I0805 11:55:58.419541 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 36443 <nil> <nil>}
I0805 11:55:58.419556 2827274 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-644345' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-644345/g' /etc/hosts;
else
echo '127.0.1.1 functional-644345' | sudo tee -a /etc/hosts;
fi
fi
I0805 11:55:58.552493 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 11:55:58.552508 2827274 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19377-2789855/.minikube CaCertPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19377-2789855/.minikube}
I0805 11:55:58.552530 2827274 ubuntu.go:177] setting up certificates
I0805 11:55:58.552540 2827274 provision.go:84] configureAuth start
I0805 11:55:58.552602 2827274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644345
I0805 11:55:58.569637 2827274 provision.go:143] copyHostCerts
I0805 11:55:58.569705 2827274 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-2789855/.minikube/cert.pem, removing ...
I0805 11:55:58.569724 2827274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-2789855/.minikube/cert.pem
I0805 11:55:58.569799 2827274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19377-2789855/.minikube/cert.pem (1123 bytes)
I0805 11:55:58.569906 2827274 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-2789855/.minikube/key.pem, removing ...
I0805 11:55:58.569910 2827274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-2789855/.minikube/key.pem
I0805 11:55:58.569934 2827274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19377-2789855/.minikube/key.pem (1679 bytes)
I0805 11:55:58.569986 2827274 exec_runner.go:144] found /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.pem, removing ...
I0805 11:55:58.569989 2827274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.pem
I0805 11:55:58.570011 2827274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.pem (1078 bytes)
I0805 11:55:58.570055 2827274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca-key.pem org=jenkins.functional-644345 san=[127.0.0.1 192.168.49.2 functional-644345 localhost minikube]
I0805 11:55:59.016930 2827274 provision.go:177] copyRemoteCerts
I0805 11:55:59.016984 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0805 11:55:59.017030 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:59.036457 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 11:55:59.134480 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0805 11:55:59.159943 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0805 11:55:59.184695 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0805 11:55:59.210418 2827274 provision.go:87] duration metric: took 657.865412ms to configureAuth
I0805 11:55:59.210436 2827274 ubuntu.go:193] setting minikube options for container-runtime
I0805 11:55:59.210628 2827274 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 11:55:59.210685 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:59.227841 2827274 main.go:141] libmachine: Using SSH client type: native
I0805 11:55:59.228093 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 36443 <nil> <nil>}
I0805 11:55:59.228101 2827274 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0805 11:55:59.361086 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0805 11:55:59.361098 2827274 ubuntu.go:71] root file system type: overlay
I0805 11:55:59.361215 2827274 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0805 11:55:59.361280 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:59.378934 2827274 main.go:141] libmachine: Using SSH client type: native
I0805 11:55:59.379177 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 36443 <nil> <nil>}
I0805 11:55:59.379251 2827274 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0805 11:55:59.529054 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0805 11:55:59.529140 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:59.547316 2827274 main.go:141] libmachine: Using SSH client type: native
I0805 11:55:59.547550 2827274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 36443 <nil> <nil>}
I0805 11:55:59.547574 2827274 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0805 11:55:59.685959 2827274 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0805 11:55:59.685972 2827274 machine.go:97] duration metric: took 1.59968455s to provisionDockerMachine
I0805 11:55:59.685982 2827274 start.go:293] postStartSetup for "functional-644345" (driver="docker")
I0805 11:55:59.685993 2827274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0805 11:55:59.686055 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0805 11:55:59.686093 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:59.704191 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 11:55:59.807892 2827274 ssh_runner.go:195] Run: cat /etc/os-release
I0805 11:55:59.812795 2827274 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0805 11:55:59.812820 2827274 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0805 11:55:59.812828 2827274 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0805 11:55:59.812834 2827274 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0805 11:55:59.812844 2827274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-2789855/.minikube/addons for local assets ...
I0805 11:55:59.812897 2827274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19377-2789855/.minikube/files for local assets ...
I0805 11:55:59.812972 2827274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem -> 27952332.pem in /etc/ssl/certs
I0805 11:55:59.813050 2827274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/test/nested/copy/2795233/hosts -> hosts in /etc/test/nested/copy/2795233
I0805 11:55:59.813102 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2795233
I0805 11:55:59.822305 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem --> /etc/ssl/certs/27952332.pem (1708 bytes)
I0805 11:55:59.847602 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/test/nested/copy/2795233/hosts --> /etc/test/nested/copy/2795233/hosts (40 bytes)
I0805 11:55:59.877861 2827274 start.go:296] duration metric: took 191.864827ms for postStartSetup
I0805 11:55:59.877952 2827274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0805 11:55:59.877991 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:55:59.904684 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 11:55:59.997168 2827274 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0805 11:56:00.002333 2827274 fix.go:56] duration metric: took 1.943996156s for fixHost
I0805 11:56:00.002354 2827274 start.go:83] releasing machines lock for "functional-644345", held for 1.944046814s
I0805 11:56:00.002458 2827274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-644345
I0805 11:56:00.117806 2827274 ssh_runner.go:195] Run: cat /version.json
I0805 11:56:00.117868 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:56:00.119868 2827274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0805 11:56:00.119986 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:56:00.174273 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 11:56:00.182212 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 11:56:00.631510 2827274 ssh_runner.go:195] Run: systemctl --version
I0805 11:56:00.636666 2827274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0805 11:56:00.642299 2827274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0805 11:56:00.667210 2827274 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0805 11:56:00.667286 2827274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0805 11:56:00.679742 2827274 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0805 11:56:00.679760 2827274 start.go:495] detecting cgroup driver to use...
I0805 11:56:00.679805 2827274 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0805 11:56:00.679911 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 11:56:00.701503 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0805 11:56:00.713210 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0805 11:56:00.725523 2827274 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0805 11:56:00.725588 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0805 11:56:00.737345 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 11:56:00.749105 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0805 11:56:00.760085 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0805 11:56:00.771939 2827274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0805 11:56:00.781764 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0805 11:56:00.792732 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0805 11:56:00.811747 2827274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0805 11:56:00.822691 2827274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0805 11:56:00.831947 2827274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0805 11:56:00.841135 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 11:56:00.958643 2827274 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0805 11:56:11.340760 2827274 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.382093286s)
I0805 11:56:11.340777 2827274 start.go:495] detecting cgroup driver to use...
I0805 11:56:11.340811 2827274 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0805 11:56:11.340872 2827274 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0805 11:56:11.357072 2827274 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0805 11:56:11.357140 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0805 11:56:11.370510 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0805 11:56:11.387322 2827274 ssh_runner.go:195] Run: which cri-dockerd
I0805 11:56:11.391769 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0805 11:56:11.400977 2827274 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0805 11:56:11.422194 2827274 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0805 11:56:11.521219 2827274 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0805 11:56:11.613068 2827274 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0805 11:56:11.613196 2827274 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0805 11:56:11.635882 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 11:56:11.754545 2827274 ssh_runner.go:195] Run: sudo systemctl restart docker
I0805 11:56:12.302488 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0805 11:56:12.314479 2827274 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0805 11:56:12.332061 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 11:56:12.345380 2827274 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0805 11:56:12.457840 2827274 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0805 11:56:12.556988 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 11:56:12.657719 2827274 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0805 11:56:12.672136 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0805 11:56:12.683869 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 11:56:12.782053 2827274 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0805 11:56:12.872120 2827274 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0805 11:56:12.872191 2827274 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0805 11:56:12.876719 2827274 start.go:563] Will wait 60s for crictl version
I0805 11:56:12.876791 2827274 ssh_runner.go:195] Run: which crictl
I0805 11:56:12.880769 2827274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0805 11:56:12.916994 2827274 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0805 11:56:12.917054 2827274 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 11:56:12.958644 2827274 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0805 11:56:13.008433 2827274 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
I0805 11:56:13.008541 2827274 cli_runner.go:164] Run: docker network inspect functional-644345 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0805 11:56:13.029716 2827274 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0805 11:56:13.037776 2827274 out.go:177] - apiserver.enable-admission-plugins=NamespaceAutoProvision
I0805 11:56:13.039345 2827274 kubeadm.go:883] updating cluster {Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0805 11:56:13.039474 2827274 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 11:56:13.039553 2827274 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 11:56:13.074749 2827274 docker.go:685] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-644345
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I0805 11:56:13.074762 2827274 docker.go:615] Images already preloaded, skipping extraction
I0805 11:56:13.074827 2827274 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0805 11:56:13.107633 2827274 docker.go:685] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-644345
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I0805 11:56:13.107648 2827274 cache_images.go:84] Images are preloaded, skipping loading
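A hypothetical sketch of the "images are preloaded" decision implied above: compare the list reported by `docker images --format {{.Repository}}:{{.Tag}}` against the images the target Kubernetes version needs. The required-image names below are copied from the log output; the comparison logic is an assumption, not minikube's cache_images implementation.

```go
// preload check sketch: confirm every required image is already present in the
// container runtime before deciding to skip the preload extraction step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img)
			return
		}
	}
	fmt.Println("all required images are preloaded; skipping extraction")
}
```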
I0805 11:56:13.107657 2827274 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.30.3 docker true true} ...
I0805 11:56:13.107779 2827274 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-644345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0805 11:56:13.107848 2827274 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0805 11:56:13.284601 2827274 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
I0805 11:56:13.284675 2827274 cni.go:84] Creating CNI manager for ""
I0805 11:56:13.284689 2827274 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 11:56:13.284698 2827274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0805 11:56:13.284717 2827274 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-644345 NodeName:functional-644345 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0805 11:56:13.284869 2827274 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "functional-644345"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0805 11:56:13.284937 2827274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
I0805 11:56:13.296363 2827274 binaries.go:44] Found k8s binaries, skipping transfer
I0805 11:56:13.296434 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0805 11:56:13.307140 2827274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
I0805 11:56:13.355913 2827274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0805 11:56:13.410604 2827274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2009 bytes)
I0805 11:56:13.453588 2827274 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0805 11:56:13.458412 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 11:56:13.606858 2827274 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 11:56:13.634796 2827274 certs.go:68] Setting up /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345 for IP: 192.168.49.2
I0805 11:56:13.634808 2827274 certs.go:194] generating shared ca certs ...
I0805 11:56:13.634823 2827274 certs.go:226] acquiring lock for ca certs: {Name:mkf68c149df12db9e13780ffd3b31cf9e53de863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 11:56:13.634957 2827274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.key
I0805 11:56:13.635003 2827274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/proxy-client-ca.key
I0805 11:56:13.635008 2827274 certs.go:256] generating profile certs ...
I0805 11:56:13.635089 2827274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/client.key
I0805 11:56:13.635133 2827274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/apiserver.key.f50bff35
I0805 11:56:13.635170 2827274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/proxy-client.key
I0805 11:56:13.635277 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/2795233.pem (1338 bytes)
W0805 11:56:13.635303 2827274 certs.go:480] ignoring /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/2795233_empty.pem, impossibly tiny 0 bytes
I0805 11:56:13.635311 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca-key.pem (1679 bytes)
I0805 11:56:13.635337 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/ca.pem (1078 bytes)
I0805 11:56:13.635359 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/cert.pem (1123 bytes)
I0805 11:56:13.635380 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/key.pem (1679 bytes)
I0805 11:56:13.635419 2827274 certs.go:484] found cert: /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem (1708 bytes)
I0805 11:56:13.636023 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0805 11:56:13.679559 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0805 11:56:13.716948 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0805 11:56:13.829266 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0805 11:56:13.926272 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0805 11:56:13.995523 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0805 11:56:14.115937 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0805 11:56:14.176581 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/profiles/functional-644345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0805 11:56:14.387984 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/certs/2795233.pem --> /usr/share/ca-certificates/2795233.pem (1338 bytes)
I0805 11:56:14.445340 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/files/etc/ssl/certs/27952332.pem --> /usr/share/ca-certificates/27952332.pem (1708 bytes)
I0805 11:56:14.556431 2827274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19377-2789855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0805 11:56:14.690894 2827274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0805 11:56:14.753967 2827274 ssh_runner.go:195] Run: openssl version
I0805 11:56:14.760052 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0805 11:56:14.771030 2827274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0805 11:56:14.776620 2827274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 5 11:47 /usr/share/ca-certificates/minikubeCA.pem
I0805 11:56:14.776687 2827274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0805 11:56:14.802558 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0805 11:56:14.826450 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2795233.pem && ln -fs /usr/share/ca-certificates/2795233.pem /etc/ssl/certs/2795233.pem"
I0805 11:56:14.847324 2827274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2795233.pem
I0805 11:56:14.856806 2827274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 5 11:53 /usr/share/ca-certificates/2795233.pem
I0805 11:56:14.856866 2827274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2795233.pem
I0805 11:56:14.866009 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2795233.pem /etc/ssl/certs/51391683.0"
I0805 11:56:14.886776 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27952332.pem && ln -fs /usr/share/ca-certificates/27952332.pem /etc/ssl/certs/27952332.pem"
I0805 11:56:14.906626 2827274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27952332.pem
I0805 11:56:14.910088 2827274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 5 11:53 /usr/share/ca-certificates/27952332.pem
I0805 11:56:14.910147 2827274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27952332.pem
I0805 11:56:14.925423 2827274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27952332.pem /etc/ssl/certs/3ec20f2e.0"
I0805 11:56:14.951793 2827274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0805 11:56:14.955403 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0805 11:56:14.968958 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0805 11:56:14.979458 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0805 11:56:14.986741 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0805 11:56:14.993966 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0805 11:56:15.001054 2827274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
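Each `openssl x509 -checkend 86400` call above asks whether the given certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A minimal Go equivalent of that check, with the file path taken from the log (this is a sketch, not minikube's certs.go):

```go
// cert expiry sketch: report whether a PEM certificate expires within 24h,
// mirroring what "openssl x509 -checkend 86400" verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
```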
I0805 11:56:15.009367 2827274 kubeadm.go:392] StartCluster: {Name:functional-644345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-644345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 11:56:15.009528 2827274 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 11:56:15.029715 2827274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0805 11:56:15.040663 2827274 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0805 11:56:15.040673 2827274 kubeadm.go:593] restartPrimaryControlPlane start ...
I0805 11:56:15.040731 2827274 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0805 11:56:15.051170 2827274 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0805 11:56:15.051789 2827274 kubeconfig.go:125] found "functional-644345" server: "https://192.168.49.2:8441"
I0805 11:56:15.053893 2827274 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0805 11:56:15.069948 2827274 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml 2024-08-05 11:54:03.148970323 +0000
+++ /var/tmp/minikube/kubeadm.yaml.new 2024-08-05 11:56:13.448098338 +0000
@@ -22,7 +22,7 @@
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+ enable-admission-plugins: "NamespaceAutoProvision"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
-- /stdout --
I0805 11:56:15.069966 2827274 kubeadm.go:1160] stopping kube-system containers ...
I0805 11:56:15.070037 2827274 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0805 11:56:15.110885 2827274 docker.go:483] Stopping containers: [19a735639c57 32f524fefe4a c797bab538ca 3e00484e3be0 a8c0721274ad 55417568c28d 272dc1fcdd9e 2f134b2e41a8 b55b3ccef27e 91df8803f2ef 4502de65a9e1 f3d08f679e92 96b7b9f5153e bdf3e5498c95 e3a79a3b7215 04d98ecb2cf4 e52d9a9979f3 c7d982acfbad 8ce6f28d04e9 120e37448711 d7946ad36219 65db6ffe51e9 b838b3c1260b 341891d2fe8c 9b84d2913ca4 1ee9841cd504 2fe5364d8906 0aea6cf8ca35 a0feaed1a256 6a3aa1b2d857 b5f9058c6fce 6a22ec4d4c5d 3128c8ccffb4]
I0805 11:56:15.110978 2827274 ssh_runner.go:195] Run: docker stop 19a735639c57 32f524fefe4a c797bab538ca 3e00484e3be0 a8c0721274ad 55417568c28d 272dc1fcdd9e 2f134b2e41a8 b55b3ccef27e 91df8803f2ef 4502de65a9e1 f3d08f679e92 96b7b9f5153e bdf3e5498c95 e3a79a3b7215 04d98ecb2cf4 e52d9a9979f3 c7d982acfbad 8ce6f28d04e9 120e37448711 d7946ad36219 65db6ffe51e9 b838b3c1260b 341891d2fe8c 9b84d2913ca4 1ee9841cd504 2fe5364d8906 0aea6cf8ca35 a0feaed1a256 6a3aa1b2d857 b5f9058c6fce 6a22ec4d4c5d 3128c8ccffb4
I0805 11:56:16.073513 2827274 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0805 11:56:16.159793 2827274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0805 11:56:16.173849 2827274 kubeadm.go:157] found existing configuration files:
-rw------- 1 root root 5651 Aug 5 11:54 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Aug 5 11:54 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2007 Aug 5 11:54 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Aug 5 11:54 /etc/kubernetes/scheduler.conf
I0805 11:56:16.173909 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I0805 11:56:16.190783 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I0805 11:56:16.203478 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I0805 11:56:16.218847 2827274 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0805 11:56:16.218927 2827274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0805 11:56:16.231789 2827274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I0805 11:56:16.244316 2827274 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0805 11:56:16.244382 2827274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0805 11:56:16.262809 2827274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0805 11:56:16.275717 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0805 11:56:16.375583 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0805 11:56:18.693297 2827274 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.317686347s)
I0805 11:56:18.693315 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0805 11:56:18.856864 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0805 11:56:18.938998 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0805 11:56:19.018369 2827274 api_server.go:52] waiting for apiserver process to appear ...
I0805 11:56:19.018432 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 11:56:19.518542 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 11:56:20.019327 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 11:56:20.518569 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 11:56:20.544615 2827274 api_server.go:72] duration metric: took 1.526246184s to wait for apiserver process to appear ...
I0805 11:56:20.544632 2827274 api_server.go:88] waiting for apiserver healthz status ...
I0805 11:56:20.544651 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0805 11:56:23.646936 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 11:56:23.646954 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 11:56:23.646967 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0805 11:56:23.706834 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0805 11:56:23.706853 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0805 11:56:24.045370 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0805 11:56:24.056829 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 11:56:24.056862 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 11:56:24.545342 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0805 11:56:24.553113 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0805 11:56:24.553136 2827274 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0805 11:56:25.044720 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0805 11:56:25.052740 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
I0805 11:56:25.066478 2827274 api_server.go:141] control plane version: v1.30.3
I0805 11:56:25.066498 2827274 api_server.go:131] duration metric: took 4.521860947s to wait for apiserver health ...
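The 403 and 500 responses above are the API server warming up: anonymous /healthz requests are rejected until the RBAC bootstrap roles exist, and the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks report failed until they complete. A hedged sketch of the retry loop this log implies (not minikube's api_server.go) looks roughly like this; TLS verification is skipped only because this ad-hoc probe does not load the cluster CA:

```go
// healthz polling sketch: retry GET /healthz until the API server answers
// 200 "ok", tolerating the early 403/500 responses seen during startup.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
```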
I0805 11:56:25.066506 2827274 cni.go:84] Creating CNI manager for ""
I0805 11:56:25.066517 2827274 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 11:56:25.069608 2827274 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0805 11:56:25.072316 2827274 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0805 11:56:25.084722 2827274 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0805 11:56:25.109900 2827274 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 11:56:25.121312 2827274 system_pods.go:59] 7 kube-system pods found
I0805 11:56:25.121335 2827274 system_pods.go:61] "coredns-7db6d8ff4d-rznxg" [6cb35c48-f0fa-4441-84f0-6378d320b427] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0805 11:56:25.121342 2827274 system_pods.go:61] "etcd-functional-644345" [58c32004-eaf5-4ad2-95dc-87b3ea92fefe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0805 11:56:25.121350 2827274 system_pods.go:61] "kube-apiserver-functional-644345" [8354ea00-5b32-4bc0-ae24-758c6808e914] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0805 11:56:25.121358 2827274 system_pods.go:61] "kube-controller-manager-functional-644345" [60fd324e-ec79-4937-94a1-f7ac6b0d7bfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0805 11:56:25.121363 2827274 system_pods.go:61] "kube-proxy-lgl7w" [15952683-b4f7-4a4e-824a-f3e88a98c26f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0805 11:56:25.121369 2827274 system_pods.go:61] "kube-scheduler-functional-644345" [7e70b355-fd2d-41f1-a3e4-8fc93d2b84c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0805 11:56:25.121374 2827274 system_pods.go:61] "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0805 11:56:25.121380 2827274 system_pods.go:74] duration metric: took 11.468518ms to wait for pod list to return data ...
I0805 11:56:25.121388 2827274 node_conditions.go:102] verifying NodePressure condition ...
I0805 11:56:25.125598 2827274 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0805 11:56:25.125620 2827274 node_conditions.go:123] node cpu capacity is 2
I0805 11:56:25.125639 2827274 node_conditions.go:105] duration metric: took 4.246741ms to run NodePressure ...
I0805 11:56:25.125657 2827274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0805 11:56:25.400395 2827274 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0805 11:56:25.405381 2827274 kubeadm.go:739] kubelet initialised
I0805 11:56:25.405391 2827274 kubeadm.go:740] duration metric: took 4.982522ms waiting for restarted kubelet to initialise ...
I0805 11:56:25.405398 2827274 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0805 11:56:25.415112 2827274 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
I0805 11:56:27.421683 2827274 pod_ready.go:102] pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace has status "Ready":"False"
I0805 11:56:28.923189 2827274 pod_ready.go:92] pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:28.923201 2827274 pod_ready.go:81] duration metric: took 3.50807401s for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
I0805 11:56:28.923210 2827274 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:30.929225 2827274 pod_ready.go:102] pod "etcd-functional-644345" in "kube-system" namespace has status "Ready":"False"
I0805 11:56:32.930935 2827274 pod_ready.go:92] pod "etcd-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:32.930951 2827274 pod_ready.go:81] duration metric: took 4.007730444s for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:32.930960 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:34.937419 2827274 pod_ready.go:102] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"False"
I0805 11:56:37.437335 2827274 pod_ready.go:102] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"False"
I0805 11:56:38.937254 2827274 pod_ready.go:92] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:38.937266 2827274 pod_ready.go:81] duration metric: took 6.006298812s for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:38.937276 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:38.943319 2827274 pod_ready.go:92] pod "kube-controller-manager-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:38.943330 2827274 pod_ready.go:81] duration metric: took 6.048205ms for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:38.943339 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
I0805 11:56:38.948847 2827274 pod_ready.go:92] pod "kube-proxy-lgl7w" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:38.948858 2827274 pod_ready.go:81] duration metric: took 5.513349ms for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
I0805 11:56:38.948868 2827274 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:38.954203 2827274 pod_ready.go:92] pod "kube-scheduler-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:38.954214 2827274 pod_ready.go:81] duration metric: took 5.339451ms for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:38.954224 2827274 pod_ready.go:38] duration metric: took 13.548818162s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
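The pod_ready.go waits above poll each system-critical pod until its PodReady condition reports True. Purely as an illustrative client-go sketch (not the helper minikube uses), with the pod name borrowed from the log:

```go
// pod readiness sketch: load the local kubeconfig, fetch a pod, and report
// whether its PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-rznxg", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
```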
I0805 11:56:38.954239 2827274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0805 11:56:38.961763 2827274 ops.go:34] apiserver oom_adj: -16
I0805 11:56:38.961775 2827274 kubeadm.go:597] duration metric: took 23.921097045s to restartPrimaryControlPlane
I0805 11:56:38.961783 2827274 kubeadm.go:394] duration metric: took 23.952428888s to StartCluster
I0805 11:56:38.961798 2827274 settings.go:142] acquiring lock: {Name:mk4a577f0ff710c971661155cffa585f8a233d3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 11:56:38.961865 2827274 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19377-2789855/kubeconfig
I0805 11:56:38.962514 2827274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19377-2789855/kubeconfig: {Name:mk43b20405f936d4b5b0f71673ce55a0d9a036ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0805 11:56:38.962731 2827274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0805 11:56:38.962982 2827274 config.go:182] Loaded profile config "functional-644345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 11:56:38.963012 2827274 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0805 11:56:38.963068 2827274 addons.go:69] Setting storage-provisioner=true in profile "functional-644345"
I0805 11:56:38.963090 2827274 addons.go:234] Setting addon storage-provisioner=true in "functional-644345"
W0805 11:56:38.963095 2827274 addons.go:243] addon storage-provisioner should already be in state true
I0805 11:56:38.963100 2827274 addons.go:69] Setting default-storageclass=true in profile "functional-644345"
I0805 11:56:38.963113 2827274 host.go:66] Checking if "functional-644345" exists ...
I0805 11:56:38.963136 2827274 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-644345"
I0805 11:56:38.963407 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 11:56:38.963505 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 11:56:38.966933 2827274 out.go:177] * Verifying Kubernetes components...
I0805 11:56:38.969902 2827274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0805 11:56:38.991189 2827274 addons.go:234] Setting addon default-storageclass=true in "functional-644345"
W0805 11:56:38.991287 2827274 addons.go:243] addon default-storageclass should already be in state true
I0805 11:56:38.991315 2827274 host.go:66] Checking if "functional-644345" exists ...
I0805 11:56:38.991719 2827274 cli_runner.go:164] Run: docker container inspect functional-644345 --format={{.State.Status}}
I0805 11:56:38.995605 2827274 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0805 11:56:38.998233 2827274 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0805 11:56:38.998245 2827274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0805 11:56:38.998318 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:56:39.015268 2827274 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0805 11:56:39.015280 2827274 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0805 11:56:39.015345 2827274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644345
I0805 11:56:39.033230 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 11:56:39.049921 2827274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36443 SSHKeyPath:/home/jenkins/minikube-integration/19377-2789855/.minikube/machines/functional-644345/id_rsa Username:docker}
I0805 11:56:39.152251 2827274 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0805 11:56:39.180381 2827274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0805 11:56:39.190694 2827274 node_ready.go:35] waiting up to 6m0s for node "functional-644345" to be "Ready" ...
I0805 11:56:39.194347 2827274 node_ready.go:49] node "functional-644345" has status "Ready":"True"
I0805 11:56:39.194359 2827274 node_ready.go:38] duration metric: took 3.645744ms for node "functional-644345" to be "Ready" ...
I0805 11:56:39.194368 2827274 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0805 11:56:39.201114 2827274 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
I0805 11:56:39.264085 2827274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0805 11:56:39.335083 2827274 pod_ready.go:92] pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:39.335095 2827274 pod_ready.go:81] duration metric: took 133.956963ms for pod "coredns-7db6d8ff4d-rznxg" in "kube-system" namespace to be "Ready" ...
I0805 11:56:39.335104 2827274 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:39.736734 2827274 pod_ready.go:92] pod "etcd-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:39.736747 2827274 pod_ready.go:81] duration metric: took 401.636357ms for pod "etcd-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:39.736756 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:39.972097 2827274 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0805 11:56:39.974641 2827274 addons.go:510] duration metric: took 1.01162047s for enable addons: enabled=[storage-provisioner default-storageclass]
I0805 11:56:40.136039 2827274 pod_ready.go:92] pod "kube-apiserver-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:40.136057 2827274 pod_ready.go:81] duration metric: took 399.29339ms for pod "kube-apiserver-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:40.136074 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:40.534908 2827274 pod_ready.go:92] pod "kube-controller-manager-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:40.534920 2827274 pod_ready.go:81] duration metric: took 398.838894ms for pod "kube-controller-manager-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:40.534931 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
I0805 11:56:40.934944 2827274 pod_ready.go:92] pod "kube-proxy-lgl7w" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:40.934956 2827274 pod_ready.go:81] duration metric: took 400.019169ms for pod "kube-proxy-lgl7w" in "kube-system" namespace to be "Ready" ...
I0805 11:56:40.934967 2827274 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:41.334640 2827274 pod_ready.go:92] pod "kube-scheduler-functional-644345" in "kube-system" namespace has status "Ready":"True"
I0805 11:56:41.334652 2827274 pod_ready.go:81] duration metric: took 399.678519ms for pod "kube-scheduler-functional-644345" in "kube-system" namespace to be "Ready" ...
I0805 11:56:41.334663 2827274 pod_ready.go:38] duration metric: took 2.140285876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0805 11:56:41.334682 2827274 api_server.go:52] waiting for apiserver process to appear ...
I0805 11:56:41.334760 2827274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0805 11:56:41.347680 2827274 api_server.go:72] duration metric: took 2.384920838s to wait for apiserver process to appear ...
I0805 11:56:41.347696 2827274 api_server.go:88] waiting for apiserver healthz status ...
I0805 11:56:41.347715 2827274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0805 11:56:41.356292 2827274 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
I0805 11:56:41.357501 2827274 api_server.go:141] control plane version: v1.30.3
I0805 11:56:41.357516 2827274 api_server.go:131] duration metric: took 9.815179ms to wait for apiserver health ...
I0805 11:56:41.357523 2827274 system_pods.go:43] waiting for kube-system pods to appear ...
I0805 11:56:41.538606 2827274 system_pods.go:59] 7 kube-system pods found
I0805 11:56:41.538622 2827274 system_pods.go:61] "coredns-7db6d8ff4d-rznxg" [6cb35c48-f0fa-4441-84f0-6378d320b427] Running
I0805 11:56:41.538626 2827274 system_pods.go:61] "etcd-functional-644345" [58c32004-eaf5-4ad2-95dc-87b3ea92fefe] Running
I0805 11:56:41.538630 2827274 system_pods.go:61] "kube-apiserver-functional-644345" [8354ea00-5b32-4bc0-ae24-758c6808e914] Running
I0805 11:56:41.538634 2827274 system_pods.go:61] "kube-controller-manager-functional-644345" [60fd324e-ec79-4937-94a1-f7ac6b0d7bfb] Running
I0805 11:56:41.538637 2827274 system_pods.go:61] "kube-proxy-lgl7w" [15952683-b4f7-4a4e-824a-f3e88a98c26f] Running
I0805 11:56:41.538639 2827274 system_pods.go:61] "kube-scheduler-functional-644345" [7e70b355-fd2d-41f1-a3e4-8fc93d2b84c3] Running
I0805 11:56:41.538642 2827274 system_pods.go:61] "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running
I0805 11:56:41.538647 2827274 system_pods.go:74] duration metric: took 181.11877ms to wait for pod list to return data ...
I0805 11:56:41.538654 2827274 default_sa.go:34] waiting for default service account to be created ...
I0805 11:56:41.734814 2827274 default_sa.go:45] found service account: "default"
I0805 11:56:41.734830 2827274 default_sa.go:55] duration metric: took 196.168581ms for default service account to be created ...
I0805 11:56:41.734838 2827274 system_pods.go:116] waiting for k8s-apps to be running ...
I0805 11:56:41.938240 2827274 system_pods.go:86] 7 kube-system pods found
I0805 11:56:41.938255 2827274 system_pods.go:89] "coredns-7db6d8ff4d-rznxg" [6cb35c48-f0fa-4441-84f0-6378d320b427] Running
I0805 11:56:41.938260 2827274 system_pods.go:89] "etcd-functional-644345" [58c32004-eaf5-4ad2-95dc-87b3ea92fefe] Running
I0805 11:56:41.938264 2827274 system_pods.go:89] "kube-apiserver-functional-644345" [8354ea00-5b32-4bc0-ae24-758c6808e914] Running
I0805 11:56:41.938268 2827274 system_pods.go:89] "kube-controller-manager-functional-644345" [60fd324e-ec79-4937-94a1-f7ac6b0d7bfb] Running
I0805 11:56:41.938271 2827274 system_pods.go:89] "kube-proxy-lgl7w" [15952683-b4f7-4a4e-824a-f3e88a98c26f] Running
I0805 11:56:41.938274 2827274 system_pods.go:89] "kube-scheduler-functional-644345" [7e70b355-fd2d-41f1-a3e4-8fc93d2b84c3] Running
I0805 11:56:41.938277 2827274 system_pods.go:89] "storage-provisioner" [33a98b3e-aef3-4edc-8e99-b9ab8f1c70de] Running
I0805 11:56:41.938282 2827274 system_pods.go:126] duration metric: took 203.440023ms to wait for k8s-apps to be running ...
I0805 11:56:41.938289 2827274 system_svc.go:44] waiting for kubelet service to be running ....
I0805 11:56:41.938353 2827274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0805 11:56:41.950885 2827274 system_svc.go:56] duration metric: took 12.586836ms WaitForService to wait for kubelet
I0805 11:56:41.950905 2827274 kubeadm.go:582] duration metric: took 2.98815192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 11:56:41.950923 2827274 node_conditions.go:102] verifying NodePressure condition ...
I0805 11:56:42.134783 2827274 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I0805 11:56:42.134802 2827274 node_conditions.go:123] node cpu capacity is 2
I0805 11:56:42.134812 2827274 node_conditions.go:105] duration metric: took 183.884619ms to run NodePressure ...
I0805 11:56:42.134824 2827274 start.go:241] waiting for startup goroutines ...
I0805 11:56:42.134831 2827274 start.go:246] waiting for cluster config update ...
I0805 11:56:42.134841 2827274 start.go:255] writing updated cluster config ...
I0805 11:56:42.135164 2827274 ssh_runner.go:195] Run: rm -f paused
I0805 11:56:42.218675 2827274 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
I0805 11:56:42.221818 2827274 out.go:177] * Done! kubectl is now configured to use "functional-644345" cluster and "default" namespace by default
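The sectioned output that follows (==> Docker <==, ==> container status <==, ==> describe nodes <==, and so on) is in the format produced by "minikube logs", which the test harness dumps as part of its post-mortem. A rough sketch of how a similar dump could be reproduced by hand against this profile, assuming the cluster is still up (these commands are illustrative and not part of the captured run):

# Re-run the same binary the test used, against the same profile.
out/minikube-linux-arm64 -p functional-644345 logs
# Optionally write the dump to a file for later inspection.
out/minikube-linux-arm64 -p functional-644345 logs --file=post-mortem.txt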
==> Docker <==
Aug 05 11:56:45 functional-644345 dockerd[7136]: time="2024-08-05T11:56:45.911162112Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
Aug 05 11:56:45 functional-644345 dockerd[7136]: time="2024-08-05T11:56:45.911212311Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
Aug 05 11:56:49 functional-644345 dockerd[7136]: time="2024-08-05T11:56:49.017721699Z" level=info msg="ignoring event" container=d981e4b23a741f24094830c29f19b8690f3633b1b7c3a4f6d6e3ea65e456712c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 05 11:56:52 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c26d3ed09734bfd1cca82649d0f5b915e320bf99f2d8198dbef089fb46ba021c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Aug 05 11:56:53 functional-644345 dockerd[7136]: time="2024-08-05T11:56:53.175594712Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:56:53 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:53Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
Aug 05 11:56:58 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/786e607c149a43293cd9d9971c00ddc0a5cc2664c19106b3aeb5a5c905502d29/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
Aug 05 11:56:58 functional-644345 dockerd[7136]: time="2024-08-05T11:56:58.748892868Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:56:58 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:56:58Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
Aug 05 11:57:07 functional-644345 dockerd[7136]: time="2024-08-05T11:57:07.342212701Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:57:07 functional-644345 dockerd[7136]: time="2024-08-05T11:57:07.345385151Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:57:13 functional-644345 dockerd[7136]: time="2024-08-05T11:57:13.334327968Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:57:13 functional-644345 dockerd[7136]: time="2024-08-05T11:57:13.336895622Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:57:35 functional-644345 dockerd[7136]: time="2024-08-05T11:57:35.314523217Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:57:35 functional-644345 dockerd[7136]: time="2024-08-05T11:57:35.317297410Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:57:42 functional-644345 dockerd[7136]: time="2024-08-05T11:57:42.332569488Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:57:42 functional-644345 dockerd[7136]: time="2024-08-05T11:57:42.335182779Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:58:26 functional-644345 dockerd[7136]: time="2024-08-05T11:58:26.332978142Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:58:26 functional-644345 dockerd[7136]: time="2024-08-05T11:58:26.336246608Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:58:30 functional-644345 dockerd[7136]: time="2024-08-05T11:58:30.327206450Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:58:30 functional-644345 dockerd[7136]: time="2024-08-05T11:58:30.329936106Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:59:49 functional-644345 dockerd[7136]: time="2024-08-05T11:59:49.411437841Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:59:49 functional-644345 cri-dockerd[7422]: time="2024-08-05T11:59:49Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
Aug 05 11:59:58 functional-644345 dockerd[7136]: time="2024-08-05T11:59:58.326992472Z" level=error msg="Not continuing with pull after error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
Aug 05 11:59:58 functional-644345 dockerd[7136]: time="2024-08-05T11:59:58.329959820Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
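Every nginx pull in the Docker log above fails with "toomanyrequests" from Docker Hub, which is why sp-pod and nginx-svc never get past image pulling. Note that the pulls are performed by dockerd inside the minikube node, not on the CI host, so authenticating on the host alone would not help. A minimal sketch of a generic workaround (not something this test run did): pre-pull the images once and side-load them into the node so the kubelet never hits the registry.

# Pull on the host (or from a mirror), then side-load into the profile's node.
docker pull docker.io/nginx:alpine
docker pull docker.io/nginx:latest
out/minikube-linux-arm64 -p functional-644345 image load docker.io/nginx:alpine
out/minikube-linux-arm64 -p functional-644345 image load docker.io/nginx:latest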
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
21b72a56bd21e 2351f570ed0ea 3 minutes ago Running kube-proxy 3 e07603d186008 kube-proxy-lgl7w
45390a0d00675 ba04bb24b9575 3 minutes ago Running storage-provisioner 3 10d8812558b5d storage-provisioner
cdb82ed6a9bf9 2437cf7621777 3 minutes ago Running coredns 2 763d56c6ca45a coredns-7db6d8ff4d-rznxg
575b592e9bffc 61773190d42ff 3 minutes ago Running kube-apiserver 0 1044e89c68931 kube-apiserver-functional-644345
3e15f18c20f6c 8e97cdb19e7cc 3 minutes ago Running kube-controller-manager 3 74ae83d23c0ba kube-controller-manager-functional-644345
8c9d1dd88ef06 d48f992a22722 3 minutes ago Running kube-scheduler 3 5e371c3deda9a kube-scheduler-functional-644345
0533e9debf009 014faa467e297 3 minutes ago Running etcd 3 df6a7e3709663 etcd-functional-644345
19a735639c57b 014faa467e297 3 minutes ago Exited etcd 2 a8c0721274ad6 etcd-functional-644345
32f524fefe4ad 8e97cdb19e7cc 3 minutes ago Exited kube-controller-manager 2 272dc1fcdd9ec kube-controller-manager-functional-644345
c797bab538ca1 2351f570ed0ea 3 minutes ago Exited kube-proxy 2 91df8803f2ef5 kube-proxy-lgl7w
3e00484e3be06 d48f992a22722 3 minutes ago Exited kube-scheduler 2 4502de65a9e15 kube-scheduler-functional-644345
f3d08f679e92f ba04bb24b9575 4 minutes ago Exited storage-provisioner 2 b838b3c1260b9 storage-provisioner
96b7b9f5153ee 2437cf7621777 4 minutes ago Exited coredns 1 120e374487113 coredns-7db6d8ff4d-rznxg
==> coredns [96b7b9f5153e] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.11.1
linux/arm64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:54057 - 3151 "HINFO IN 8660356540477454018.1651853706661492004. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026590878s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [cdb82ed6a9bf] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.11.1
linux/arm64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:46394 - 53341 "HINFO IN 7099334815732562033.2827557498159571306. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021247939s
==> describe nodes <==
Name: functional-644345
Roles: control-plane
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=functional-644345
kubernetes.io/os=linux
minikube.k8s.io/commit=cfb202720123668c7435df1698a76741c3e0d87f
minikube.k8s.io/name=functional-644345
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_08_05T11_54_20_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 05 Aug 2024 11:54:17 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-644345
AcquireTime: <unset>
RenewTime: Mon, 05 Aug 2024 11:59:58 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 05 Aug 2024 11:56:23 +0000 Mon, 05 Aug 2024 11:54:13 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 05 Aug 2024 11:56:23 +0000 Mon, 05 Aug 2024 11:54:13 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 05 Aug 2024 11:56:23 +0000 Mon, 05 Aug 2024 11:54:13 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 05 Aug 2024 11:56:23 +0000 Mon, 05 Aug 2024 11:54:20 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: functional-644345
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022364Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022364Ki
pods: 110
System Info:
Machine ID: 3e8a09581c064f1493bc60872b585519
System UUID: 70e367f6-896f-4fe9-a485-c8492974a937
Boot ID: 055eef35-1ace-412e-809d-b7b68a43eb42
Kernel Version: 5.15.0-1066-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.30.3
Kube-Proxy Version: v1.30.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default nginx-svc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m7s
default sp-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m2s
kube-system coredns-7db6d8ff4d-rznxg 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 5m26s
kube-system etcd-functional-644345 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 5m39s
kube-system kube-apiserver-functional-644345 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m35s
kube-system kube-controller-manager-functional-644345 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m39s
kube-system kube-proxy-lgl7w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m26s
kube-system kube-scheduler-functional-644345 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m39s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m25s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (2%) 170Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 5m24s kube-proxy
Normal Starting 3m34s kube-proxy
Normal Starting 4m21s kube-proxy
Normal Starting 5m47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m47s (x8 over 5m47s) kubelet Node functional-644345 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m47s (x8 over 5m47s) kubelet Node functional-644345 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m47s (x7 over 5m47s) kubelet Node functional-644345 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m47s kubelet Updated Node Allocatable limit across pods
Normal NodeNotReady 5m40s kubelet Node functional-644345 status is now: NodeNotReady
Normal Starting 5m40s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 5m40s kubelet Node functional-644345 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m40s kubelet Node functional-644345 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 5m40s kubelet Node functional-644345 status is now: NodeHasSufficientMemory
Normal NodeAllocatableEnforced 5m39s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 5m39s kubelet Node functional-644345 status is now: NodeReady
Normal RegisteredNode 5m27s node-controller Node functional-644345 event: Registered Node functional-644345 in Controller
Warning ContainerGCFailed 4m40s kubelet rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Normal RegisteredNode 4m11s node-controller Node functional-644345 event: Registered Node functional-644345 in Controller
Normal Starting 3m41s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m40s (x8 over 3m40s) kubelet Node functional-644345 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m40s (x8 over 3m40s) kubelet Node functional-644345 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m40s (x7 over 3m40s) kubelet Node functional-644345 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m40s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 3m23s node-controller Node functional-644345 event: Registered Node functional-644345 in Controller
==> dmesg <==
[ +0.000642] FS-Cache: N-cookie c=000000ad [p=000000a4 fl=2 nc=0 na=1]
[ +0.000845] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=00000000e51524aa
[ +0.000975] FS-Cache: N-key=[8] '4b6d3b0000000000'
[ +0.006808] FS-Cache: Duplicate cookie detected
[ +0.000650] FS-Cache: O-cookie c=000000a7 [p=000000a4 fl=226 nc=0 na=1]
[ +0.000921] FS-Cache: O-cookie d=000000005957fd62{9p.inode} n=000000009ca223dc
[ +0.000977] FS-Cache: O-key=[8] '4b6d3b0000000000'
[ +0.000649] FS-Cache: N-cookie c=000000ae [p=000000a4 fl=2 nc=0 na=1]
[ +0.000898] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=000000008c1874a4
[ +0.000963] FS-Cache: N-key=[8] '4b6d3b0000000000'
[ +2.309208] FS-Cache: Duplicate cookie detected
[ +0.000654] FS-Cache: O-cookie c=000000a5 [p=000000a4 fl=226 nc=0 na=1]
[ +0.000938] FS-Cache: O-cookie d=000000005957fd62{9p.inode} n=00000000e3df5fa7
[ +0.000991] FS-Cache: O-key=[8] '4a6d3b0000000000'
[ +0.000657] FS-Cache: N-cookie c=000000b0 [p=000000a4 fl=2 nc=0 na=1]
[ +0.000867] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=0000000013f782bc
[ +0.000964] FS-Cache: N-key=[8] '4a6d3b0000000000'
[ +0.334642] FS-Cache: Duplicate cookie detected
[ +0.000744] FS-Cache: O-cookie c=000000aa [p=000000a4 fl=226 nc=0 na=1]
[ +0.000899] FS-Cache: O-cookie d=000000005957fd62{9p.inode} n=0000000061ef2b15
[ +0.000955] FS-Cache: O-key=[8] '506d3b0000000000'
[ +0.000663] FS-Cache: N-cookie c=000000b1 [p=000000a4 fl=2 nc=0 na=1]
[ +0.000856] FS-Cache: N-cookie d=000000005957fd62{9p.inode} n=00000000e51524aa
[ +0.000959] FS-Cache: N-key=[8] '506d3b0000000000'
[Aug 5 11:19] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [0533e9debf00] <==
{"level":"info","ts":"2024-08-05T11:56:19.765271Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T11:56:19.765279Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T11:56:19.765492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-08-05T11:56:19.765535Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-08-05T11:56:19.765611Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T11:56:19.765637Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T11:56:19.774492Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-08-05T11:56:19.774715Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-08-05T11:56:19.77474Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-05T11:56:19.774855Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-08-05T11:56:19.774862Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-08-05T11:56:21.342524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
{"level":"info","ts":"2024-08-05T11:56:21.342826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
{"level":"info","ts":"2024-08-05T11:56:21.342994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
{"level":"info","ts":"2024-08-05T11:56:21.34316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
{"level":"info","ts":"2024-08-05T11:56:21.343265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
{"level":"info","ts":"2024-08-05T11:56:21.34338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
{"level":"info","ts":"2024-08-05T11:56:21.343474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
{"level":"info","ts":"2024-08-05T11:56:21.345628Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-644345 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2024-08-05T11:56:21.345997Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T11:56:21.346005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2024-08-05T11:56:21.348222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2024-08-05T11:56:21.348262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2024-08-05T11:56:21.348193Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2024-08-05T11:56:21.35975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
==> etcd [19a735639c57] <==
{"level":"info","ts":"2024-08-05T11:56:14.527365Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.48681ms"}
{"level":"info","ts":"2024-08-05T11:56:14.554208Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2024-08-05T11:56:14.564004Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","commit-index":595}
{"level":"info","ts":"2024-08-05T11:56:14.57137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2024-08-05T11:56:14.571603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 3"}
{"level":"info","ts":"2024-08-05T11:56:14.571616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 3, commit: 595, applied: 0, lastindex: 595, lastterm: 3]"}
{"level":"warn","ts":"2024-08-05T11:56:14.572782Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2024-08-05T11:56:14.582875Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":565}
{"level":"info","ts":"2024-08-05T11:56:14.586455Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2024-08-05T11:56:14.599548Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aec36adc501070cc","timeout":"7s"}
{"level":"info","ts":"2024-08-05T11:56:14.59989Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aec36adc501070cc"}
{"level":"info","ts":"2024-08-05T11:56:14.599921Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
{"level":"info","ts":"2024-08-05T11:56:14.600146Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2024-08-05T11:56:14.600361Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T11:56:14.6004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T11:56:14.60041Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2024-08-05T11:56:14.600625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2024-08-05T11:56:14.600672Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2024-08-05T11:56:14.600751Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T11:56:14.600775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2024-08-05T11:56:14.614342Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2024-08-05T11:56:14.615444Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2024-08-05T11:56:14.615485Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2024-08-05T11:56:14.615667Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2024-08-05T11:56:14.615676Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
==> kernel <==
11:59:59 up 19:42, 0 users, load average: 0.48, 1.55, 2.04
Linux functional-644345 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kube-apiserver [575b592e9bff] <==
I0805 11:56:23.746613 1 shared_informer.go:320] Caches are synced for node_authorizer
I0805 11:56:23.751857 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0805 11:56:23.752140 1 policy_source.go:224] refreshing policies
I0805 11:56:23.754077 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0805 11:56:23.812660 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0805 11:56:23.813245 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0805 11:56:23.814990 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0805 11:56:23.815019 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0805 11:56:23.819065 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0805 11:56:23.819340 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0805 11:56:23.819370 1 aggregator.go:165] initial CRD sync complete...
I0805 11:56:23.819601 1 autoregister_controller.go:141] Starting autoregister controller
I0805 11:56:23.819728 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0805 11:56:23.819828 1 cache.go:39] Caches are synced for autoregister controller
E0805 11:56:23.821486 1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
I0805 11:56:24.622593 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0805 11:56:25.249818 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0805 11:56:25.263472 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0805 11:56:25.317876 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0805 11:56:25.373571 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0805 11:56:25.382707 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0805 11:56:36.696541 1 controller.go:615] quota admission added evaluator for: endpoints
I0805 11:56:36.746405 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0805 11:56:45.224417 1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.190.197"}
I0805 11:56:52.291210 1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.198.96"}
==> kube-controller-manager [32f524fefe4a] <==
==> kube-controller-manager [3e15f18c20f6] <==
I0805 11:56:36.440920 1 shared_informer.go:320] Caches are synced for GC
I0805 11:56:36.447489 1 shared_informer.go:320] Caches are synced for TTL
I0805 11:56:36.449766 1 shared_informer.go:320] Caches are synced for namespace
I0805 11:56:36.449894 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0805 11:56:36.451557 1 shared_informer.go:320] Caches are synced for PV protection
I0805 11:56:36.454915 1 shared_informer.go:320] Caches are synced for ephemeral
I0805 11:56:36.460118 1 shared_informer.go:320] Caches are synced for node
I0805 11:56:36.460236 1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
I0805 11:56:36.460293 1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0805 11:56:36.460304 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0805 11:56:36.460311 1 shared_informer.go:320] Caches are synced for cidrallocator
I0805 11:56:36.463780 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0805 11:56:36.470133 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0805 11:56:36.470331 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="136.311µs"
I0805 11:56:36.479572 1 shared_informer.go:320] Caches are synced for stateful set
I0805 11:56:36.520827 1 shared_informer.go:320] Caches are synced for endpoint_slice
I0805 11:56:36.538957 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0805 11:56:36.554302 1 shared_informer.go:320] Caches are synced for resource quota
I0805 11:56:36.592042 1 shared_informer.go:320] Caches are synced for ReplicationController
I0805 11:56:36.593284 1 shared_informer.go:320] Caches are synced for resource quota
I0805 11:56:36.609846 1 shared_informer.go:320] Caches are synced for disruption
I0805 11:56:36.632188 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0805 11:56:37.061380 1 shared_informer.go:320] Caches are synced for garbage collector
I0805 11:56:37.061439 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0805 11:56:37.089869 1 shared_informer.go:320] Caches are synced for garbage collector
==> kube-proxy [21b72a56bd21] <==
I0805 11:56:24.894839 1 server_linux.go:69] "Using iptables proxy"
I0805 11:56:24.934457 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
I0805 11:56:24.986052 1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0805 11:56:24.986337 1 server_linux.go:165] "Using iptables Proxier"
I0805 11:56:24.988578 1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0805 11:56:24.988730 1 server_linux.go:528] "Defaulting to no-op detect-local"
I0805 11:56:24.988845 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0805 11:56:24.989179 1 server.go:872] "Version info" version="v1.30.3"
I0805 11:56:24.989512 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 11:56:24.990575 1 config.go:192] "Starting service config controller"
I0805 11:56:24.990956 1 shared_informer.go:313] Waiting for caches to sync for service config
I0805 11:56:24.991136 1 config.go:101] "Starting endpoint slice config controller"
I0805 11:56:24.991219 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0805 11:56:24.992771 1 config.go:319] "Starting node config controller"
I0805 11:56:24.992791 1 shared_informer.go:313] Waiting for caches to sync for node config
I0805 11:56:25.091723 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0805 11:56:25.091795 1 shared_informer.go:320] Caches are synced for service config
I0805 11:56:25.094591 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [c797bab538ca] <==
I0805 11:56:14.727627 1 server_linux.go:69] "Using iptables proxy"
E0805 11:56:14.738157 1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644345\": dial tcp 192.168.49.2:8441: connect: connection refused"
==> kube-scheduler [3e00484e3be0] <==
==> kube-scheduler [8c9d1dd88ef0] <==
I0805 11:56:21.391514 1 serving.go:380] Generated self-signed cert in-memory
W0805 11:56:23.722123 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0805 11:56:23.722227 1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0805 11:56:23.722258 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0805 11:56:23.722306 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0805 11:56:23.758160 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
I0805 11:56:23.758197 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0805 11:56:23.760406 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0805 11:56:23.761092 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0805 11:56:23.771365 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0805 11:56:23.771410 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0805 11:56:23.871850 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Aug 05 11:58:26 functional-644345 kubelet[8900]: E0805 11:58:26.344499 8900 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfhr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(e3c3985a-4f4d
-4dad-a476-587a0ab830e7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 05 11:58:26 functional-644345 kubelet[8900]: E0805 11:58:26.344919 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330633 8900 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330691 8900 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330792 8900 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4js6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start
failed in pod sp-pod_default(7916faf8-6ac9-46d3-aed5-006a182fd8d7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 05 11:58:30 functional-644345 kubelet[8900]: E0805 11:58:30.330823 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
Aug 05 11:58:39 functional-644345 kubelet[8900]: E0805 11:58:39.085490 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
Aug 05 11:58:42 functional-644345 kubelet[8900]: E0805 11:58:42.061728 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
Aug 05 11:58:51 functional-644345 kubelet[8900]: E0805 11:58:51.062149 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
Aug 05 11:58:54 functional-644345 kubelet[8900]: E0805 11:58:54.061934 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
Aug 05 11:59:05 functional-644345 kubelet[8900]: E0805 11:59:05.063723 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
Aug 05 11:59:06 functional-644345 kubelet[8900]: E0805 11:59:06.061663 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
Aug 05 11:59:19 functional-644345 kubelet[8900]: E0805 11:59:19.062828 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
Aug 05 11:59:20 functional-644345 kubelet[8900]: E0805 11:59:20.062000 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
Aug 05 11:59:32 functional-644345 kubelet[8900]: E0805 11:59:32.061177 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
Aug 05 11:59:35 functional-644345 kubelet[8900]: E0805 11:59:35.063403 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
Aug 05 11:59:43 functional-644345 kubelet[8900]: E0805 11:59:43.062281 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414757 8900 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414811 8900 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414897 8900 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfhr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(e3c3985a-4f4d
-4dad-a476-587a0ab830e7): ErrImagePull: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 05 11:59:49 functional-644345 kubelet[8900]: E0805 11:59:49.414928 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e3c3985a-4f4d-4dad-a476-587a0ab830e7"
Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.330560 8900 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.330647 8900 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.331006 8900 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4js6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start
failed in pod sp-pod_default(7916faf8-6ac9-46d3-aed5-006a182fd8d7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 05 11:59:58 functional-644345 kubelet[8900]: E0805 11:59:58.331039 8900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7916faf8-6ac9-46d3-aed5-006a182fd8d7"
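The kubelet entries above show both pods cycling between ErrImagePull and ImagePullBackOff for docker.io/nginx. A quick way to confirm the waiting reason and recent events directly from the API, assuming the same kubectl context the test uses, might look like this (illustrative commands, not part of the test):

# Current waiting reason of the sp-pod container (expected: ImagePullBackOff or ErrImagePull).
kubectl --context functional-644345 -n default get pod sp-pod \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'
# Events for the pod, newest last.
kubectl --context functional-644345 -n default get events \
  --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp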
==> storage-provisioner [45390a0d0067] <==
I0805 11:56:24.960249 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0805 11:56:24.977712 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0805 11:56:24.978470 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0805 11:56:42.384427 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0805 11:56:42.384824 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-644345_8b944752-9df4-4fd9-9411-17b1358aee8d!
I0805 11:56:42.386277 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d94ea9c4-0626-49b7-9ee8-92bd8a1db863", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-644345_8b944752-9df4-4fd9-9411-17b1358aee8d became leader
I0805 11:56:42.485884 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-644345_8b944752-9df4-4fd9-9411-17b1358aee8d!
I0805 11:56:57.617633 1 controller.go:1332] provision "default/myclaim" class "standard": started
I0805 11:56:57.622724 1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard 4ec02f3a-9b01-4dce-a8f9-defe18b5ab8d 384 0 2024-08-05 11:54:34 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
storageclass.kubernetes.io/is-default-class:true] [] [] [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-05 11:54:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b &PersistentVolumeClaim{ObjectMeta:{myclaim default 1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b 714 0 2024-08-05 11:56:57 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection] [{kube-controller-manager Update v1 2024-08-05 11:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-05 11:56:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
I0805 11:56:57.623242 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
I0805 11:56:57.628342 1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b" provisioned
I0805 11:56:57.632774 1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
I0805 11:56:57.632792 1 volume_store.go:212] Trying to save persistentvolume "pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b"
I0805 11:56:57.657364 1 volume_store.go:219] persistentvolume "pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b" saved
I0805 11:56:57.671887 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-1e7089d2-3636-4c60-aeaa-1c1c7dcd8f5b
==> storage-provisioner [f3d08f679e92] <==
I0805 11:55:45.974708 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0805 11:55:45.991107 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0805 11:55:45.991160 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644345 -n functional-644345
helpers_test.go:254: (dbg) Done: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-644345 -n functional-644345: (1.298116031s)
helpers_test.go:261: (dbg) Run: kubectl --context functional-644345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context functional-644345 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-644345 describe pod nginx-svc sp-pod:
-- stdout --
Name: nginx-svc
Namespace: default
Priority: 0
Service Account: default
Node: functional-644345/192.168.49.2
Start Time: Mon, 05 Aug 2024 11:56:52 +0000
Labels: run=nginx-svc
Annotations: <none>
Status: Pending
IP: 10.244.0.8
IPs:
IP: 10.244.0.8
Containers:
nginx:
Container ID:
Image: docker.io/nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sfhr2 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-sfhr2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m9s default-scheduler Successfully assigned default/nginx-svc to functional-644345
Warning Failed 3m8s kubelet Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal Pulling 95s (x4 over 3m9s) kubelet Pulling image "docker.io/nginx:alpine"
Warning Failed 95s (x4 over 3m8s) kubelet Error: ErrImagePull
Warning Failed 95s (x3 over 2m54s) kubelet Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning Failed 82s (x6 over 3m8s) kubelet Error: ImagePullBackOff
Normal BackOff 70s (x7 over 3m8s) kubelet Back-off pulling image "docker.io/nginx:alpine"
Name: sp-pod
Namespace: default
Priority: 0
Service Account: default
Node: functional-644345/192.168.49.2
Start Time: Mon, 05 Aug 2024 11:56:57 +0000
Labels: test=storage-provisioner
Annotations: <none>
Status: Pending
IP: 10.244.0.9
IPs:
IP: 10.244.0.9
Containers:
myfrontend:
Container ID:
Image: docker.io/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4js6j (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim
ReadOnly: false
kube-api-access-4js6j:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m4s default-scheduler Successfully assigned default/sp-pod to functional-644345
Warning Failed 3m3s kubelet Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal Pulling 91s (x4 over 3m3s) kubelet Pulling image "docker.io/nginx"
Warning Failed 91s (x4 over 3m3s) kubelet Error: ErrImagePull
Warning Failed 91s (x3 over 2m48s) kubelet Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning Failed 79s (x6 over 3m2s) kubelet Error: ImagePullBackOff
Normal BackOff 67s (x7 over 3m2s) kubelet Back-off pulling image "docker.io/nginx"
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.54s)