=== RUN TestPause/serial/Start
pause_test.go:80: (dbg) Run: out/minikube-linux-amd64 start -p pause-20220412195428-42006 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd
=== CONT TestPause/serial/Start
pause_test.go:80: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-20220412195428-42006 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd: exit status 80 (8m9.199615017s)
-- stdout --
* [pause-20220412195428-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13812
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Using Docker driver with the root privilege
* Starting control plane node pause-20220412195428-42006 in cluster pause-20220412195428-42006
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* docker "pause-20220412195428-42006" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
- kubelet.cni-conf-dir=/etc/cni/net.mk
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
-- /stdout --
** stderr **
! Your cgroup does not allow setting memory.
! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname pause-20220412195428-42006 --name pause-20220412195428-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20220412195428-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=pause-20220412195428-42006 --network pause-20220412195428-42006 --ip 192.168.67.2 --volume pause-20220412195428-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
stdout:
f9d0991530d0581b50e510758083539a7e482f67eb3b3d1a5507dfc7ef305bba
stderr:
docker: Error response from daemon: network pause-20220412195428-42006 not found.
E0412 19:58:03.552730 200789 docker.go:186] "Failed to stop" err=<
sudo service docker.socket stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
> service="docker.socket"
E0412 19:58:04.008773 200789 docker.go:189] "Failed to stop" err=<
sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
> service="docker.service"
X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-linux-amd64 start -p pause-20220412195428-42006 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-20220412195428-42006
helpers_test.go:235: (dbg) docker inspect pause-20220412195428-42006:
-- stdout --
[
{
"Id": "74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d",
"Created": "2022-04-12T19:57:59.319875771Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 227184,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-04-12T19:57:59.700960531Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
"ResolvConfPath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/hostname",
"HostsPath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/hosts",
"LogPath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d-json.log",
"Name": "/pause-20220412195428-42006",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-20220412195428-42006:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-20220412195428-42006",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd126522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/docker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f1950f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
"MergedDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74/merged",
"UpperDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74/diff",
"WorkDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-20220412195428-42006",
"Source": "/var/lib/docker/volumes/pause-20220412195428-42006/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-20220412195428-42006",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-20220412195428-42006",
"name.minikube.sigs.k8s.io": "pause-20220412195428-42006",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "0f75bb928e8ac027a49c4dc78bb37c6cdee8489247947ecc90db35c496d71abf",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49377"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49376"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49373"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49375"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49374"
}
]
},
"SandboxKey": "/var/run/docker/netns/0f75bb928e8a",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-20220412195428-42006": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": [
"74a6b7630f45",
"pause-20220412195428-42006"
],
"NetworkID": "e0e8680058858d6f3017b8c830a2946a7333d8bdab094ded846fda14f9ccfd15",
"EndpointID": "d50e791c3b92160155181af9dd3b2783ec53a3ec635f7aeae2f0cb5d5b09bbd9",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220412195428-42006 -n pause-20220412195428-42006
helpers_test.go:244: <<< TestPause/serial/Start FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/Start]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-20220412195428-42006 logs -n 25
helpers_test.go:252: TestPause/serial/Start logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
| start | -p | missing-upgrade-20220412195111-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:41 UTC | Tue, 12 Apr 2022 19:53:40 UTC |
| | missing-upgrade-20220412195111-42006 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | kubernetes-upgrade-20220412195142-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:48 UTC | Tue, 12 Apr 2022 19:53:43 UTC |
| | kubernetes-upgrade-20220412195142-42006 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.6-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | missing-upgrade-20220412195111-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:40 UTC | Tue, 12 Apr 2022 19:53:44 UTC |
| | missing-upgrade-20220412195111-42006 | | | | | |
| start | -p | running-upgrade-20220412195256-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:40 UTC | Tue, 12 Apr 2022 19:54:20 UTC |
| | running-upgrade-20220412195256-42006 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | running-upgrade-20220412195256-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:20 UTC | Tue, 12 Apr 2022 19:54:23 UTC |
| | running-upgrade-20220412195256-42006 | | | | | |
| start | -p | kubernetes-upgrade-20220412195142-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:43 UTC | Tue, 12 Apr 2022 19:54:25 UTC |
| | kubernetes-upgrade-20220412195142-42006 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.6-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | kubernetes-upgrade-20220412195142-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:26 UTC | Tue, 12 Apr 2022 19:54:29 UTC |
| | kubernetes-upgrade-20220412195142-42006 | | | | | |
| start | -p | cert-options-20220412195344-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:44 UTC | Tue, 12 Apr 2022 19:54:33 UTC |
| | cert-options-20220412195344-42006 | | | | | |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| -p | cert-options-20220412195344-42006 | cert-options-20220412195344-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:33 UTC | Tue, 12 Apr 2022 19:54:34 UTC |
| | ssh openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p | cert-options-20220412195344-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:34 UTC | Tue, 12 Apr 2022 19:54:34 UTC |
| | cert-options-20220412195344-42006 | | | | | |
| | -- sudo cat | | | | | |
| | /etc/kubernetes/admin.conf | | | | | |
| delete | -p | cert-options-20220412195344-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:34 UTC | Tue, 12 Apr 2022 19:54:42 UTC |
| | cert-options-20220412195344-42006 | | | | | |
| start | -p auto-20220412195201-42006 | auto-20220412195201-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:29 UTC | Tue, 12 Apr 2022 19:55:30 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p auto-20220412195201-42006 | auto-20220412195201-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:30 UTC | Tue, 12 Apr 2022 19:55:31 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p auto-20220412195201-42006 | auto-20220412195201-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:45 UTC | Tue, 12 Apr 2022 19:55:47 UTC |
| start | -p | custom-weave-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:42 UTC | Tue, 12 Apr 2022 19:55:57 UTC |
| | custom-weave-20220412195203-42006 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=testdata/weavenet.yaml | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p | custom-weave-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:57 UTC | Tue, 12 Apr 2022 19:55:57 UTC |
| | custom-weave-20220412195203-42006 | | | | | |
| | pgrep -a kubelet | | | | | |
| start | -p | cert-expiration-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:03 UTC | Tue, 12 Apr 2022 19:56:06 UTC |
| | cert-expiration-20220412195203-42006 | | | | | |
| | --memory=2048 --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | custom-weave-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:56:06 UTC | Tue, 12 Apr 2022 19:56:09 UTC |
| | custom-weave-20220412195203-42006 | | | | | |
| start | -p cilium-20220412195203-42006 | cilium-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:47 UTC | Tue, 12 Apr 2022 19:57:10 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=cilium --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p cilium-20220412195203-42006 | cilium-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:15 UTC | Tue, 12 Apr 2022 19:57:15 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p cilium-20220412195203-42006 | cilium-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:26 UTC | Tue, 12 Apr 2022 19:57:29 UTC |
| start | -p | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:29 UTC | Tue, 12 Apr 2022 19:58:30 UTC |
| | enable-default-cni-20220412195202-42006 | | | | | |
| | --memory=2048 --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --enable-default-cni=true | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:58:31 UTC | Tue, 12 Apr 2022 19:58:31 UTC |
| | enable-default-cni-20220412195202-42006 | | | | | |
| | pgrep -a kubelet | | | | | |
| start | -p | cert-expiration-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:06 UTC | Tue, 12 Apr 2022 19:59:21 UTC |
| | cert-expiration-20220412195203-42006 | | | | | |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | cert-expiration-20220412195203-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:21 UTC | Tue, 12 Apr 2022 19:59:24 UTC |
| | cert-expiration-20220412195203-42006 | | | | | |
|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2022/04/12 19:59:24
Running on machine: ubuntu-20-agent-11
Binary: Built with gc go1.18 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0412 19:59:24.334098 234625 out.go:297] Setting OutFile to fd 1 ...
I0412 19:59:24.334239 234625 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0412 19:59:24.334252 234625 out.go:310] Setting ErrFile to fd 2...
I0412 19:59:24.334260 234625 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0412 19:59:24.334387 234625 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
I0412 19:59:24.334683 234625 out.go:304] Setting JSON to false
I0412 19:59:24.336564 234625 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9718,"bootTime":1649783847,"procs":934,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0412 19:59:24.336639 234625 start.go:125] virtualization: kvm guest
I0412 19:59:24.339497 234625 out.go:176] * [kindnet-20220412195202-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
I0412 19:59:24.341050 234625 out.go:176] - MINIKUBE_LOCATION=13812
I0412 19:59:24.339701 234625 notify.go:193] Checking for updates...
I0412 19:59:24.342602 234625 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0412 19:59:24.344189 234625 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0412 19:59:24.345784 234625 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
I0412 19:59:24.347397 234625 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0412 19:59:24.347890 234625 config.go:178] Loaded profile config "calico-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0412 19:59:24.347994 234625 config.go:178] Loaded profile config "enable-default-cni-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0412 19:59:24.348140 234625 config.go:178] Loaded profile config "pause-20220412195428-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0412 19:59:24.348206 234625 driver.go:346] Setting default libvirt URI to qemu:///system
I0412 19:59:24.394733 234625 docker.go:137] docker version: linux-20.10.14
I0412 19:59:24.394842 234625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0412 19:59:24.495597 234625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 19:59:24.426483159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0412 19:59:24.495701 234625 docker.go:254] overlay module found
I0412 19:59:24.498059 234625 out.go:176] * Using the docker driver based on user configuration
I0412 19:59:24.498101 234625 start.go:284] selected driver: docker
I0412 19:59:24.498109 234625 start.go:801] validating driver "docker" against <nil>
I0412 19:59:24.498154 234625 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
W0412 19:59:24.498233 234625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0412 19:59:24.498258 234625 out.go:241] ! Your cgroup does not allow setting memory.
I0412 19:59:24.499962 234625 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0412 19:59:24.500690 234625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0412 19:59:24.600012 234625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 19:59:24.531537467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0412 19:59:24.600181 234625 start_flags.go:292] no existing cluster config was found, will generate one from the flags
I0412 19:59:24.600379 234625 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0412 19:59:24.602706 234625 out.go:176] * Using Docker driver with the root privilege
I0412 19:59:24.602738 234625 cni.go:93] Creating CNI manager for "kindnet"
I0412 19:59:24.602753 234625 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0412 19:59:24.602762 234625 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0412 19:59:24.602775 234625 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
I0412 19:59:24.602791 234625 start_flags.go:306] config:
{Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0412 19:59:24.604933 234625 out.go:176] * Starting control plane node kindnet-20220412195202-42006 in cluster kindnet-20220412195202-42006
I0412 19:59:24.605003 234625 cache.go:120] Beginning downloading kic base image for docker with containerd
I0412 19:59:24.606597 234625 out.go:176] * Pulling base image ...
I0412 19:59:24.606630 234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0412 19:59:24.606673 234625 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
I0412 19:59:24.606687 234625 cache.go:57] Caching tarball of preloaded images
I0412 19:59:24.606723 234625 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
I0412 19:59:24.606991 234625 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0412 19:59:24.607011 234625 cache.go:60] Finished verifying existence of preloaded tar for v1.23.5 on containerd
I0412 19:59:24.607155 234625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json ...
I0412 19:59:24.607189 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json: {Name:mk96c1d1e18e9cc0d948a88792a7261621bb1906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 19:59:24.657122 234625 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
I0412 19:59:24.657151 234625 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
I0412 19:59:24.657174 234625 cache.go:206] Successfully downloaded all kic artifacts
I0412 19:59:24.657214 234625 start.go:352] acquiring machines lock for kindnet-20220412195202-42006: {Name:mk9278724d41a33f689e63fe04712fa9ece6a9db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0412 19:59:24.657383 234625 start.go:356] acquired machines lock for "kindnet-20220412195202-42006" in 129.688µs
I0412 19:59:24.657415 234625 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0412 19:59:24.657537 234625 start.go:131] createHost starting for "" (driver="docker")
I0412 19:59:23.217220 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:25.218123 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:27.717321 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:24.392933 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:26.893188 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:24.660324 234625 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0412 19:59:24.660584 234625 start.go:165] libmachine.API.Create for "kindnet-20220412195202-42006" (driver="docker")
I0412 19:59:24.660619 234625 client.go:168] LocalClient.Create starting
I0412 19:59:24.660700 234625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
I0412 19:59:24.660743 234625 main.go:134] libmachine: Decoding PEM data...
I0412 19:59:24.660767 234625 main.go:134] libmachine: Parsing certificate...
I0412 19:59:24.660848 234625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
I0412 19:59:24.660881 234625 main.go:134] libmachine: Decoding PEM data...
I0412 19:59:24.660901 234625 main.go:134] libmachine: Parsing certificate...
I0412 19:59:24.661225 234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0412 19:59:24.694938 234625 cli_runner.go:211] docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0412 19:59:24.695024 234625 network_create.go:272] running [docker network inspect kindnet-20220412195202-42006] to gather additional debugging logs...
I0412 19:59:24.695052 234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006
W0412 19:59:24.730811 234625 cli_runner.go:211] docker network inspect kindnet-20220412195202-42006 returned with exit code 1
I0412 19:59:24.730843 234625 network_create.go:275] error running [docker network inspect kindnet-20220412195202-42006]: docker network inspect kindnet-20220412195202-42006: exit status 1
stdout:
[]
stderr:
Error: No such network: kindnet-20220412195202-42006
I0412 19:59:24.730878 234625 network_create.go:277] output of [docker network inspect kindnet-20220412195202-42006]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: kindnet-20220412195202-42006
** /stderr **
I0412 19:59:24.730940 234625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0412 19:59:24.768260 234625 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3941532cd703 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:87:d3:29:2b}}
I0412 19:59:24.768721 234625 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6a56a3e6bf06 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9a:ff:38:75}}
I0412 19:59:24.769301 234625 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000c22240] misses:0}
I0412 19:59:24.769343 234625 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0412 19:59:24.769356 234625 network_create.go:115] attempt to create docker network kindnet-20220412195202-42006 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0412 19:59:24.769429 234625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220412195202-42006
I0412 19:59:24.841511 234625 network_create.go:99] docker network kindnet-20220412195202-42006 192.168.67.0/24 created
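[editor's note] The network.go lines above walk candidate /24 subnets, skip 192.168.49.0/24 and 192.168.58.0/24 because existing bridge networks occupy them, and settle on 192.168.67.0/24. A minimal sketch of that walk, assuming the start octet 49 and step of 9 that this log suggests (these values are inferred from the log, not taken from minikube's source):

```python
def pick_free_subnet(taken, start=49, step=9):
    """Return the first 192.168.x.0/24 subnet not in `taken`, stepping the
    third octet by `step`, or None if every candidate is taken.
    Illustrative sketch only; not minikube's actual network.go logic."""
    octet = start
    while octet <= 255:
        candidate = f"192.168.{octet}.0/24"
        if candidate not in taken:
            return candidate
        octet += step
    return None

# The two bridge networks already present in this log are taken:
print(pick_free_subnet({"192.168.49.0/24", "192.168.58.0/24"}))  # -> 192.168.67.0/24
```

With both earlier subnets occupied, the sketch lands on 192.168.67.0/24, matching the subnet reserved and created above.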
I0412 19:59:24.841545 234625 kic.go:106] calculated static IP "192.168.67.2" for the "kindnet-20220412195202-42006" container
I0412 19:59:24.841619 234625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0412 19:59:24.877293 234625 cli_runner.go:164] Run: docker volume create kindnet-20220412195202-42006 --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --label created_by.minikube.sigs.k8s.io=true
I0412 19:59:24.915458 234625 oci.go:103] Successfully created a docker volume kindnet-20220412195202-42006
I0412 19:59:24.915539 234625 cli_runner.go:164] Run: docker run --rm --name kindnet-20220412195202-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --entrypoint /usr/bin/test -v kindnet-20220412195202-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
I0412 19:59:25.504270 234625 oci.go:107] Successfully prepared a docker volume kindnet-20220412195202-42006
I0412 19:59:25.504323 234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0412 19:59:25.504354 234625 kic.go:179] Starting extracting preloaded images to volume ...
I0412 19:59:25.504427 234625 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220412195202-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
I0412 19:59:29.717968 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:32.218157 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:29.391612 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:31.392017 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:33.394289 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:33.135503 234625 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220412195202-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (7.630958725s)
I0412 19:59:33.135546 234625 kic.go:188] duration metric: took 7.631188 seconds to extract preloaded images to volume
W0412 19:59:33.135597 234625 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0412 19:59:33.135612 234625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
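[editor's note] The two oci.go warnings above foreshadow this run's failure ("Your cgroup does not allow setting memory" in the stderr). Note that they fire even though the docker info dump earlier reports MemoryLimit:true and SwapLimit:true, so the real check evidently inspects the host cgroup configuration directly. As a hypothetical sketch (not minikube's actual oci.go code; only the warning strings are taken verbatim from the log), the decision can be modeled as a mapping from capability flags to warnings:

```python
# Warning strings copied verbatim from the oci.go lines above.
SWAP_WARNING = ("Your kernel does not support swap limit capabilities "
                "or the cgroup is not mounted.")
MEMORY_WARNING = ("Your kernel does not support memory limit capabilities "
                  "or the cgroup is not mounted.")

def limit_warnings(memory_limit: bool, swap_limit: bool) -> list[str]:
    """Return the warnings that apply when cgroup limit support is missing.
    Hypothetical sketch of the flag-to-warning mapping, ordered as logged
    (swap first, then memory)."""
    warnings = []
    if not swap_limit:
        warnings.append(SWAP_WARNING)
    if not memory_limit:
        warnings.append(MEMORY_WARNING)
    return warnings
```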
I0412 19:59:33.135684 234625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0412 19:59:33.236242 234625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220412195202-42006 --name kindnet-20220412195202-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --network kindnet-20220412195202-42006 --ip 192.168.67.2 --volume kindnet-20220412195202-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
I0412 19:59:33.700774 234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Running}}
I0412 19:59:33.772841 234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
I0412 19:59:33.814140 234625 cli_runner.go:164] Run: docker exec kindnet-20220412195202-42006 stat /var/lib/dpkg/alternatives/iptables
I0412 19:59:33.885208 234625 oci.go:279] the created container "kindnet-20220412195202-42006" has a running status.
I0412 19:59:33.885243 234625 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa...
I0412 19:59:33.988927 234625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0412 19:59:34.095658 234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
I0412 19:59:34.154504 234625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0412 19:59:34.154534 234625 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220412195202-42006 chown docker:docker /home/docker/.ssh/authorized_keys]
I0412 19:59:34.265172 234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
I0412 19:59:34.303689 234625 machine.go:88] provisioning docker machine ...
I0412 19:59:34.303737 234625 ubuntu.go:169] provisioning hostname "kindnet-20220412195202-42006"
I0412 19:59:34.303791 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 19:59:34.717995 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:37.216943 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:35.892247 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:38.392656 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:34.342549 234625 main.go:134] libmachine: Using SSH client type: native
I0412 19:59:34.342769 234625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil> [] 0s} 127.0.0.1 49382 <nil> <nil>}
I0412 19:59:34.342791 234625 main.go:134] libmachine: About to run SSH command:
sudo hostname kindnet-20220412195202-42006 && echo "kindnet-20220412195202-42006" | sudo tee /etc/hostname
I0412 19:59:34.478710 234625 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-20220412195202-42006
I0412 19:59:34.478797 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 19:59:34.514508 234625 main.go:134] libmachine: Using SSH client type: native
I0412 19:59:34.514696 234625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil> [] 0s} 127.0.0.1 49382 <nil> <nil>}
I0412 19:59:34.514729 234625 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\skindnet-20220412195202-42006' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220412195202-42006/g' /etc/hosts;
else
echo '127.0.1.1 kindnet-20220412195202-42006' | sudo tee -a /etc/hosts;
fi
fi
I0412 19:59:34.636254 234625 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0412 19:59:34.636282 234625 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
I0412 19:59:34.636301 234625 ubuntu.go:177] setting up certificates
I0412 19:59:34.636310 234625 provision.go:83] configureAuth start
I0412 19:59:34.636356 234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
I0412 19:59:34.670840 234625 provision.go:138] copyHostCerts
I0412 19:59:34.670908 234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
I0412 19:59:34.670921 234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
I0412 19:59:34.670988 234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
I0412 19:59:34.671081 234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
I0412 19:59:34.671096 234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
I0412 19:59:34.671123 234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
I0412 19:59:34.671173 234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
I0412 19:59:34.671181 234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
I0412 19:59:34.671204 234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
I0412 19:59:34.671242 234625 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220412195202-42006 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220412195202-42006]
I0412 19:59:34.782478 234625 provision.go:172] copyRemoteCerts
I0412 19:59:34.782544 234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0412 19:59:34.782579 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 19:59:34.817760 234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
I0412 19:59:34.906211 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0412 19:59:34.925349 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0412 19:59:34.947214 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0412 19:59:34.966787 234625 provision.go:86] duration metric: configureAuth took 330.462021ms
I0412 19:59:34.966815 234625 ubuntu.go:193] setting minikube options for container-runtime
I0412 19:59:34.967000 234625 config.go:178] Loaded profile config "kindnet-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0412 19:59:34.967013 234625 machine.go:91] provisioned docker machine in 663.294289ms
I0412 19:59:34.967019 234625 client.go:171] LocalClient.Create took 10.306388857s
I0412 19:59:34.967034 234625 start.go:173] duration metric: libmachine.API.Create for "kindnet-20220412195202-42006" took 10.306453895s
I0412 19:59:34.967049 234625 start.go:306] post-start starting for "kindnet-20220412195202-42006" (driver="docker")
I0412 19:59:34.967060 234625 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0412 19:59:34.967107 234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0412 19:59:34.967146 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 19:59:35.006426 234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
I0412 19:59:35.096908 234625 ssh_runner.go:195] Run: cat /etc/os-release
I0412 19:59:35.100043 234625 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0412 19:59:35.100113 234625 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0412 19:59:35.100132 234625 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0412 19:59:35.100141 234625 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0412 19:59:35.100154 234625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
I0412 19:59:35.100216 234625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
I0412 19:59:35.100289 234625 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
I0412 19:59:35.100388 234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0412 19:59:35.108243 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
I0412 19:59:35.128335 234625 start.go:309] post-start completed in 161.261633ms
I0412 19:59:35.128743 234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
I0412 19:59:35.163301 234625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json ...
I0412 19:59:35.163570 234625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0412 19:59:35.163614 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 19:59:35.199687 234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
I0412 19:59:35.289368 234625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0412 19:59:35.293975 234625 start.go:134] duration metric: createHost completed in 10.636420263s
I0412 19:59:35.294008 234625 start.go:81] releasing machines lock for "kindnet-20220412195202-42006", held for 10.636608341s
I0412 19:59:35.294107 234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
I0412 19:59:35.329324 234625 ssh_runner.go:195] Run: systemctl --version
I0412 19:59:35.329391 234625 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0412 19:59:35.329396 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 19:59:35.329451 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 19:59:35.366712 234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
I0412 19:59:35.370262 234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
I0412 19:59:35.452540 234625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0412 19:59:35.475848 234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0412 19:59:35.486091 234625 docker.go:183] disabling docker service ...
I0412 19:59:35.486153 234625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0412 19:59:35.503897 234625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0412 19:59:35.514103 234625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0412 19:59:35.602325 234625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0412 19:59:35.682686 234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0412 19:59:35.693997 234625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0412 19:59:35.709312 234625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
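The containerd config above is delivered as one base64 blob piped through `base64 -d | sudo tee`, which sidesteps shell-quoting and newline-escaping problems when pushing a multi-line TOML file through `ssh_runner`. A minimal sketch of the same idiom, writing to a scratch path instead of `/etc/containerd/config.toml` (the path and config content here are illustrative, not the blob from the log):

```shell
# Encode a multi-line config locally, then decode it on the far side of the pipe.
# /tmp/config.toml stands in for /etc/containerd/config.toml.
CONFIG='version = 2
root = "/var/lib/containerd"
state = "/run/containerd"'
ENCODED=$(printf '%s' "$CONFIG" | base64 -w0)   # -w0: no line wrapping (GNU coreutils)
# On a real node the tail would be: | base64 -d | sudo tee /etc/containerd/config.toml
printf '%s' "$ENCODED" | base64 -d > /tmp/config.toml
```

Because the payload travels as a single `[A-Za-z0-9+/=]` token, no character in the config can be mangled by the remote shell's quoting rules.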
I0412 19:59:35.726756 234625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0412 19:59:35.734723 234625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0412 19:59:35.741966 234625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0412 19:59:35.855077 234625 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0412 19:59:35.927565 234625 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
I0412 19:59:35.927640 234625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0412 19:59:35.931767 234625 start.go:462] Will wait 60s for crictl version
I0412 19:59:35.931829 234625 ssh_runner.go:195] Run: sudo crictl version
I0412 19:59:35.959625 234625 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-04-12T19:59:35Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
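containerd was restarted moments earlier, so the CRI server can legitimately answer `server is not initialized yet` for a few seconds; `retry.go` handles this by re-running `sudo crictl version` after a delay (about 11s here) rather than failing the start. A sketch of that retry-until-ready pattern, with a simulated probe standing in for `crictl version`:

```shell
# Simulated readiness probe: fails on the first two calls, succeeds on the third,
# tracking attempts in a scratch file so the behavior is observable.
rm -f /tmp/probe_count
probe() {
  n=$(cat /tmp/probe_count 2>/dev/null || echo 0)
  n=$((n + 1))
  echo "$n" > /tmp/probe_count
  [ "$n" -ge 3 ]   # on a real node this would be: sudo crictl version >/dev/null 2>&1
}

delay=1
until probe; do
  echo "runtime not ready, retrying in ${delay}s"
  sleep "$delay"
  delay=$((delay * 2))   # back off between attempts; the log shows a 60s overall cap
done
echo "runtime ready after $(cat /tmp/probe_count) attempts"
```

The later `I0412 19:59:47.035718 ... start.go:471] Version: 0.1.0` line is the retry succeeding once containerd has finished initializing.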
I0412 19:59:39.717113 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:41.718117 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:40.891783 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:42.892174 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:47.007016 234625 ssh_runner.go:195] Run: sudo crictl version
I0412 19:59:47.035718 234625 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.5.10
RuntimeApiVersion: v1alpha2
I0412 19:59:47.035789 234625 ssh_runner.go:195] Run: containerd --version
I0412 19:59:47.057937 234625 ssh_runner.go:195] Run: containerd --version
I0412 19:59:47.083583 234625 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
I0412 19:59:47.083694 234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0412 19:59:47.119300 234625 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I0412 19:59:47.122851 234625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
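The `/etc/hosts` update above uses a replace-then-swap idiom: filter out any existing `host.minikube.internal` line, append the fresh entry, write the result to a temp file, then `sudo cp` it over the target so the file is replaced in one step. A sketch of the same pattern against a scratch copy (IPs and `/tmp` paths are illustrative; a POSIX-built tab replaces the log's bash-only `$'\t'`):

```shell
# Seed a scratch hosts file with tab-separated entries, like the lines the
# grep -v pattern in the log is written to match.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.168.67.1\thost.minikube.internal\n' > "$HOSTS"

TAB=$(printf '\t')
# Drop any stale host.minikube.internal line, append the new mapping, swap the file in.
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"; echo "192.168.67.5 host.minikube.internal"; } > "/tmp/h.$$"
cp "/tmp/h.$$" "$HOSTS"   # the real command does this cp via sudo to write /etc/hosts
```

Writing to `/tmp/h.$$` first means the read (`grep -v`) and the write never touch the same file at the same time.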
I0412 19:59:44.217319 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:46.717677 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:47.134888 234625 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0412 19:59:47.134973 234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
I0412 19:59:47.135033 234625 ssh_runner.go:195] Run: sudo crictl images --output json
I0412 19:59:47.161492 234625 containerd.go:607] all images are preloaded for containerd runtime.
I0412 19:59:47.161517 234625 containerd.go:521] Images already preloaded, skipping extraction
I0412 19:59:47.161562 234625 ssh_runner.go:195] Run: sudo crictl images --output json
I0412 19:59:47.186488 234625 containerd.go:607] all images are preloaded for containerd runtime.
I0412 19:59:47.186513 234625 cache_images.go:84] Images are preloaded, skipping loading
I0412 19:59:47.186577 234625 ssh_runner.go:195] Run: sudo crictl info
I0412 19:59:47.212894 234625 cni.go:93] Creating CNI manager for "kindnet"
I0412 19:59:47.212932 234625 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0412 19:59:47.212953 234625 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220412195202-42006 NodeName:kindnet-20220412195202-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0412 19:59:47.213114 234625 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "kindnet-20220412195202-42006"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.5
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0412 19:59:47.213218 234625 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kindnet-20220412195202-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
I0412 19:59:47.213284 234625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
I0412 19:59:47.221668 234625 binaries.go:44] Found k8s binaries, skipping transfer
I0412 19:59:47.221744 234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0412 19:59:47.229345 234625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
I0412 19:59:47.244031 234625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0412 19:59:47.257717 234625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
I0412 19:59:47.271915 234625 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0412 19:59:47.275046 234625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0412 19:59:47.285681 234625 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006 for IP: 192.168.67.2
I0412 19:59:47.285815 234625 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
I0412 19:59:47.285882 234625 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
I0412 19:59:47.285948 234625 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key
I0412 19:59:47.285980 234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt with IP's: []
I0412 19:59:47.707380 234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt ...
I0412 19:59:47.707423 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt: {Name:mk5059b3c4fae947bb1fc99c8693ca8f2b5e9668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 19:59:47.707679 234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key ...
I0412 19:59:47.707699 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key: {Name:mk6c27fac79f3772ad8e270e49ba33e4795e15de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 19:59:47.707842 234625 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e
I0412 19:59:47.707864 234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0412 19:59:47.835182 234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e ...
I0412 19:59:47.835214 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e: {Name:mk9e6b042dbd3040132f0c6e4fc317c376013de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 19:59:47.835433 234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e ...
I0412 19:59:47.835450 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e: {Name:mk0670b8a49acf77375ca4180f2f6a38616b9c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 19:59:47.835571 234625 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt
I0412 19:59:47.835658 234625 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key
I0412 19:59:47.835719 234625 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key
I0412 19:59:47.835740 234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt with IP's: []
I0412 19:59:48.032648 234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt ...
I0412 19:59:48.032682 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt: {Name:mk528ca3c8cae5bc77058b8b0d4389c64b0ac73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 19:59:48.032906 234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key ...
I0412 19:59:48.032923 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key: {Name:mkcae74fa4c12fae2d02c0880924d829f627972c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 19:59:48.033184 234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
W0412 19:59:48.033241 234625 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
I0412 19:59:48.033258 234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
I0412 19:59:48.033316 234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
I0412 19:59:48.033350 234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
I0412 19:59:48.033383 234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
I0412 19:59:48.033438 234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
I0412 19:59:48.034144 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0412 19:59:48.055187 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0412 19:59:48.075056 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0412 19:59:48.095916 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0412 19:59:48.116341 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0412 19:59:48.135114 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0412 19:59:48.154103 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0412 19:59:48.173233 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0412 19:59:48.192800 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0412 19:59:48.212546 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
I0412 19:59:48.233026 234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
I0412 19:59:48.251632 234625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0412 19:59:48.266099 234625 ssh_runner.go:195] Run: openssl version
I0412 19:59:48.271402 234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
I0412 19:59:48.279695 234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
I0412 19:59:48.283066 234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
I0412 19:59:48.283119 234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
I0412 19:59:48.288470 234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
I0412 19:59:48.296579 234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0412 19:59:48.305946 234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0412 19:59:48.309733 234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
I0412 19:59:48.309797 234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0412 19:59:48.315491 234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0412 19:59:48.323461 234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
I0412 19:59:48.331682 234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
I0412 19:59:48.335099 234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
I0412 19:59:48.335158 234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
I0412 19:59:48.340576 234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
I0412 19:59:48.348569 234625 kubeadm.go:391] StartCluster: {Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0412 19:59:48.348663 234625 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0412 19:59:48.348705 234625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0412 19:59:48.373690 234625 cri.go:87] found id: ""
I0412 19:59:48.373763 234625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0412 19:59:48.381689 234625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0412 19:59:48.390331 234625 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0412 19:59:48.390395 234625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0412 19:59:48.398073 234625 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0412 19:59:48.398143 234625 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0412 19:59:44.892323 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:47.391500 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:48.676509 234625 out.go:203] - Generating certificates and keys ...
I0412 19:59:48.717877 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:51.218596 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:49.892672 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:52.391793 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:51.433028 234625 out.go:203] - Booting up control plane ...
I0412 19:59:53.718296 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:56.217829 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 19:59:54.392132 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:56.392172 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:58.392867 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 19:59:58.218169 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:00.717695 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:02.717883 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:03.478717 234625 out.go:203] - Configuring RBAC rules ...
I0412 20:00:03.893499 234625 cni.go:93] Creating CNI manager for "kindnet"
I0412 20:00:00.892434 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:03.392852 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:03.895818 234625 out.go:176] * Configuring CNI (Container Networking Interface) ...
I0412 20:00:03.895907 234625 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0412 20:00:03.899812 234625 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
I0412 20:00:03.899838 234625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0412 20:00:03.913929 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0412 20:00:05.219805 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:07.717338 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:05.892834 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:07.893420 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:04.692692 234625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0412 20:00:04.692766 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:04.692774 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=kindnet-20220412195202-42006 minikube.k8s.io/updated_at=2022_04_12T20_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:04.786887 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:04.786942 234625 ops.go:34] apiserver oom_adj: -16
I0412 20:00:05.348474 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:05.848261 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:06.347958 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:06.848142 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:07.348534 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:07.848181 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:08.348252 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:08.848242 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:09.717617 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:12.217985 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:10.392569 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:12.893358 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:09.348718 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:09.848435 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:10.348189 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:10.848205 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:11.348276 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:11.847965 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:12.348072 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:12.848241 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:13.348206 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:13.847960 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:14.348831 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:14.848686 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:15.348733 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:15.847949 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:16.348332 234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0412 20:00:16.422308 234625 kubeadm.go:1020] duration metric: took 11.729581193s to wait for elevateKubeSystemPrivileges.
I0412 20:00:16.422402 234625 kubeadm.go:393] StartCluster complete in 28.073846211s
I0412 20:00:16.422430 234625 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 20:00:16.422559 234625 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0412 20:00:16.424828 234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 20:00:16.945845 234625 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220412195202-42006" rescaled to 1
I0412 20:00:16.945920 234625 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0412 20:00:16.947880 234625 out.go:176] * Verifying Kubernetes components...
I0412 20:00:16.947946 234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0412 20:00:16.945962 234625 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0412 20:00:16.946039 234625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0412 20:00:16.946209 234625 config.go:178] Loaded profile config "kindnet-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
I0412 20:00:16.948060 234625 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220412195202-42006"
I0412 20:00:16.948137 234625 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220412195202-42006"
I0412 20:00:16.948148 234625 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220412195202-42006"
I0412 20:00:16.948171 234625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220412195202-42006"
W0412 20:00:16.948152 234625 addons.go:165] addon storage-provisioner should already be in state true
I0412 20:00:16.948301 234625 host.go:66] Checking if "kindnet-20220412195202-42006" exists ...
I0412 20:00:16.948605 234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
I0412 20:00:16.948824 234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
I0412 20:00:16.994055 234625 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0412 20:00:16.994187 234625 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0412 20:00:16.994201 234625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0412 20:00:16.994256 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 20:00:16.996443 234625 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220412195202-42006"
W0412 20:00:16.996486 234625 addons.go:165] addon default-storageclass should already be in state true
I0412 20:00:16.996527 234625 host.go:66] Checking if "kindnet-20220412195202-42006" exists ...
I0412 20:00:16.997174 234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
I0412 20:00:17.030079 234625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0412 20:00:17.031701 234625 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220412195202-42006" to be "Ready" ...
I0412 20:00:17.035075 234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
I0412 20:00:17.041458 234625 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0412 20:00:17.041486 234625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0412 20:00:17.041543 234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
I0412 20:00:17.080438 234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
I0412 20:00:17.193530 234625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0412 20:00:17.195131 234625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0412 20:00:17.294685 234625 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
I0412 20:00:14.717593 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:16.717777 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:15.391553 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:17.393407 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:17.612049 234625 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0412 20:00:17.612127 234625 addons.go:417] enableAddons completed in 666.177991ms
I0412 20:00:19.038275 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:18.717902 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:21.217896 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:19.892385 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:22.391892 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:21.038649 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:23.538578 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:23.717571 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:26.217565 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:24.392661 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:26.891680 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:28.892137 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:26.038437 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:28.538627 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:28.717803 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:31.217481 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:31.391697 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:33.891480 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:31.038447 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:33.538307 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:33.717464 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:36.217669 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:35.893182 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:38.392498 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:35.538917 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:38.038927 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:38.717159 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:40.717855 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:40.891596 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:42.892711 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:40.538521 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:42.540527 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:43.217256 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:45.217852 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:47.717122 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:45.391842 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:47.891765 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:45.038334 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:47.038391 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:49.717797 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:52.217675 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:49.892024 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:51.892307 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:49.538324 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:51.538974 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:54.038323 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:54.717564 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:57.217842 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:00:54.391535 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:56.392469 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:58.892155 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:00:56.038611 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:58.539241 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:00:59.717546 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:01.718124 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:01.391754 215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:02.897309 215682 pod_ready.go:81] duration metric: took 4m0.072489065s waiting for pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace to be "Ready" ...
E0412 20:01:02.897340 215682 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0412 20:01:02.897351 215682 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-rp9nw" in "kube-system" namespace to be "Ready" ...
I0412 20:01:01.038645 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:03.038739 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:04.217482 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:06.717647 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:04.910641 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:07.409695 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:05.039226 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:07.538297 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:08.717806 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:11.217926 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:09.411216 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:11.910496 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:09.538495 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:11.538805 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:14.038511 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:13.716895 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:15.717087 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:17.717899 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:14.409641 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:16.409674 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:18.409945 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:16.038744 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:18.539026 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:20.217613 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:22.217978 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:20.409978 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:22.410212 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:21.039163 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:23.538809 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:24.717538 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:26.718036 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:24.910097 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:26.911338 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:25.538960 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:27.539080 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:29.217786 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:31.717219 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:29.409357 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:31.410241 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:33.909935 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:30.038178 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:32.038980 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:34.217387 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:36.717576 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:36.410475 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:38.910091 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:34.538822 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:37.038790 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:39.217155 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:41.717921 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:41.409568 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:43.410362 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:39.538195 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:41.538778 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:44.038722 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:44.217153 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:46.217484 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:45.410662 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:47.909438 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:46.539146 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:49.038295 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:48.217798 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:50.217902 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:52.718052 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:49.910205 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:52.409746 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:51.038758 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:53.039071 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:55.217066 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:57.217692 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:01:54.410475 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:56.910213 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:58.910650 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:01:55.539116 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:58.038934 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:01:59.717349 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:01.718045 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:01.409592 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:03.410035 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:00.039044 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:02.539085 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:04.217477 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:06.217876 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:05.910262 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:08.409777 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:05.039182 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:07.538476 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:08.717347 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:10.717910 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:10.410013 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:12.410056 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:09.538679 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:12.038785 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:14.038818 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:13.218046 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:15.717348 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:14.910778 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:16.911702 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:16.538735 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:19.038825 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:18.217618 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:20.717594 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:22.717754 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:19.409449 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:21.410665 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:23.910445 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:21.039094 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:23.538266 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:25.217365 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:27.717700 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:25.910686 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:28.409680 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:25.539402 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:28.039025 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:30.217534 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:32.717420 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:30.909521 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:32.910092 215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
I0412 20:02:30.538544 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:33.038931 234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
I0412 20:02:34.717695 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:36.717943 200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
I0412 20:02:37.220203 200789 node_ready.go:38] duration metric: took 4m0.0098666s waiting for node "pause-20220412195428-42006" to be "Ready" ...
I0412 20:02:37.222618 200789 out.go:176]
W0412 20:02:37.222763 200789 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0412 20:02:37.222775 200789 out.go:241] *
W0412 20:02:37.223467 200789 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b08f5ef3bae50 6de166512aa22 About a minute ago Running kindnet-cni 1 a1e50dee04f41
bde184ab19256 6de166512aa22 4 minutes ago Exited kindnet-cni 0 a1e50dee04f41
770f07872e71d 3c53fa8541f95 4 minutes ago Running kube-proxy 0 e6d238531ecc9
a5102a9c6c188 884d49d6d8c9f 4 minutes ago Running kube-scheduler 0 9bc604b175965
12297e4242865 3fc1d62d65872 4 minutes ago Running kube-apiserver 0 86f3034b1ab0c
ec3584dd3bc99 b0c9e5e4dbb14 4 minutes ago Running kube-controller-manager 0 148ffee7343df
ab8e5cd14558c 25f8c7f3da61c 4 minutes ago Running etcd 0 c5b276b6c7036
*
* ==> containerd <==
* -- Logs begin at Tue 2022-04-12 19:58:00 UTC, end at Tue 2022-04-12 20:02:38 UTC. --
Apr 12 19:58:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:17.986386893Z" level=info msg="StartContainer for \"ec3584dd3bc996c2e709a0d7c44be4c02546fcd782f152d395deb6d890efa53b\" returns successfully"
Apr 12 19:58:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:17.986427490Z" level=info msg="StartContainer for \"12297e42428653f65289acbe7149d83b7948bcaef5f91622ac1b42b6cff89754\" returns successfully"
Apr 12 19:58:35 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:35.789614369Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.943408223Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-mkwvw,Uid:4d1d82b2-0635-445b-8f4b-862f04d00f43,Namespace:kube-system,Attempt:0,}"
Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.943933976Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-bc6md,Uid:64b79d06-dc7d-4efd-b7d7-89cdc366440f,Namespace:kube-system,Attempt:0,}"
Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.965269922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3 pid=1973
Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.966395681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281 pid=1982
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.038140310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mkwvw,Uid:4d1d82b2-0635-445b-8f4b-862f04d00f43,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281\""
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.041003708Z" level=info msg="CreateContainer within sandbox \"e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.057550796Z" level=info msg="CreateContainer within sandbox \"e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7\""
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.058233627Z" level=info msg="StartContainer for \"770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7\""
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.185038647Z" level=info msg="StartContainer for \"770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7\" returns successfully"
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.284785276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-bc6md,Uid:64b79d06-dc7d-4efd-b7d7-89cdc366440f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\""
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.288309053Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.304897861Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff\""
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.305523966Z" level=info msg="StartContainer for \"bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff\""
Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.599360566Z" level=info msg="StartContainer for \"bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff\" returns successfully"
Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.828601256Z" level=info msg="shim disconnected" id=bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff
Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.828671599Z" level=warning msg="cleaning up after shim disconnected" id=bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff namespace=k8s.io
Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.828683294Z" level=info msg="cleaning up dead shim"
Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.839643364Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:01:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2317\n"
Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.706913410Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.723493956Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"b08f5ef3bae5064ce05e3300f832dc204db6541b93779b8153ce918133be9ee5\""
Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.724570525Z" level=info msg="StartContainer for \"b08f5ef3bae5064ce05e3300f832dc204db6541b93779b8153ce918133be9ee5\""
Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.884198752Z" level=info msg="StartContainer for \"b08f5ef3bae5064ce05e3300f832dc204db6541b93779b8153ce918133be9ee5\" returns successfully"
*
* ==> describe nodes <==
* Name: pause-20220412195428-42006
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220412195428-42006
kubernetes.io/os=linux
minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
minikube.k8s.io/name=pause-20220412195428-42006
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_04_12T19_58_25_0700
minikube.k8s.io/version=v1.25.2
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 12 Apr 2022 19:58:20 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: pause-20220412195428-42006
AcquireTime: <unset>
RenewTime: Tue, 12 Apr 2022 20:02:28 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 12 Apr 2022 19:58:34 +0000 Tue, 12 Apr 2022 19:58:18 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 12 Apr 2022 19:58:34 +0000 Tue, 12 Apr 2022 19:58:18 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 12 Apr 2022 19:58:34 +0000 Tue, 12 Apr 2022 19:58:18 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Tue, 12 Apr 2022 19:58:34 +0000 Tue, 12 Apr 2022 19:58:18 +0000 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.76.2
Hostname: pause-20220412195428-42006
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873828Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873828Ki
pods: 110
System Info:
Machine ID: 140a143b31184b58be947b52a01fff83
System UUID: f7bfddc0-fa9a-494c-85f2-66b8e6c42fb6
Boot ID: 16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
Kernel Version: 5.13.0-1023-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.5.10
Kubelet Version: v1.23.5
Kube-Proxy Version: v1.23.5
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system   etcd-pause-20220412195428-42006                      100m (1%)   0 (0%)      100Mi (0%)   0 (0%)      4m14s
kube-system   kindnet-bc6md                                        100m (1%)   100m (1%)   50Mi (0%)    50Mi (0%)   4m2s
kube-system   kube-apiserver-pause-20220412195428-42006            250m (3%)   0 (0%)      0 (0%)       0 (0%)      4m14s
kube-system   kube-controller-manager-pause-20220412195428-42006   200m (2%)   0 (0%)      0 (0%)       0 (0%)      4m15s
kube-system   kube-proxy-mkwvw                                     0 (0%)      0 (0%)      0 (0%)       0 (0%)      4m2s
kube-system   kube-scheduler-pause-20220412195428-42006            100m (1%)   0 (0%)      0 (0%)       0 (0%)      4m14s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                750m (9%)    100m (1%)
memory             150Mi (0%)   50Mi (0%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-1Gi      0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m1s kube-proxy
Normal Starting 4m14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m14s kubelet Node pause-20220412195428-42006 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m14s kubelet Node pause-20220412195428-42006 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m14s kubelet Node pause-20220412195428-42006 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m14s kubelet Updated Node Allocatable limit across pods
*
* ==> dmesg <==
* [ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +2.947870] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.019798] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.023930] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +17.927324] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.019424] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.019947] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[Apr12 20:02] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.007834] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.023920] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +2.967928] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.031787] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
[ +1.027962] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
*
* ==> etcd [ab8e5cd14558cb29546efe15f9215efe57017c2193e7f6646140863c3dee6124] <==
* {"level":"info","ts":"2022-04-12T19:58:18.090Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
{"level":"info","ts":"2022-04-12T19:58:18.318Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:58:18.319Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:58:18.319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-04-12T19:58:18.321Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-04-12T19:58:18.321Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
{"level":"info","ts":"2022-04-12T19:58:18.322Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-20220412195428-42006 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
{"level":"info","ts":"2022-04-12T19:59:32.440Z","caller":"traceutil/trace.go:171","msg":"trace[803619546] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"109.877762ms","start":"2022-04-12T19:59:32.330Z","end":"2022-04-12T19:59:32.440Z","steps":["trace[803619546] 'process raft request' (duration: 61.958105ms)","trace[803619546] 'compare' (duration: 47.821616ms)"],"step_count":2}
*
* ==> kernel <==
* 20:02:38 up 2:45, 0 users, load average: 0.28, 1.70, 2.02
Linux pause-20220412195428-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [12297e42428653f65289acbe7149d83b7948bcaef5f91622ac1b42b6cff89754] <==
* I0412 19:58:20.883314 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0412 19:58:20.883331 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0412 19:58:20.885171 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0412 19:58:20.885186 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0412 19:58:20.900979 1 controller.go:611] quota admission added evaluator for: namespaces
I0412 19:58:20.903622 1 shared_informer.go:247] Caches are synced for node_authorizer
I0412 19:58:21.737857 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0412 19:58:21.737886 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0412 19:58:21.742601 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
I0412 19:58:21.745735 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
I0412 19:58:21.745757 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0412 19:58:22.163015 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0412 19:58:22.205407 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0412 19:58:22.332552 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0412 19:58:22.340173 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0412 19:58:22.341375 1 controller.go:611] quota admission added evaluator for: endpoints
I0412 19:58:22.345504 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0412 19:58:22.910532 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0412 19:58:24.017738 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0412 19:58:24.027682 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0412 19:58:24.037816 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0412 19:58:24.286568 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0412 19:58:36.614124 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I0412 19:58:36.665133 1 controller.go:611] quota admission added evaluator for: replicasets.apps
I0412 19:58:37.317616 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
*
* ==> kube-controller-manager [ec3584dd3bc996c2e709a0d7c44be4c02546fcd782f152d395deb6d890efa53b] <==
* I0412 19:58:35.962032 1 shared_informer.go:247] Caches are synced for daemon sets
I0412 19:58:35.962192 1 shared_informer.go:247] Caches are synced for endpoint
I0412 19:58:35.964587 1 shared_informer.go:247] Caches are synced for job
I0412 19:58:35.966551 1 shared_informer.go:247] Caches are synced for resource quota
I0412 19:58:35.966551 1 shared_informer.go:247] Caches are synced for ephemeral
I0412 19:58:35.967517 1 event.go:294] "Event occurred" object="kube-system/etcd-pause-20220412195428-42006" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0412 19:58:35.969672 1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-pause-20220412195428-42006" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0412 19:58:35.970876 1 shared_informer.go:247] Caches are synced for resource quota
I0412 19:58:35.971816 1 shared_informer.go:247] Caches are synced for GC
I0412 19:58:35.986622 1 shared_informer.go:247] Caches are synced for HPA
I0412 19:58:35.995901 1 shared_informer.go:247] Caches are synced for stateful set
I0412 19:58:36.012836 1 shared_informer.go:247] Caches are synced for deployment
I0412 19:58:36.012924 1 shared_informer.go:247] Caches are synced for attach detach
I0412 19:58:36.015198 1 shared_informer.go:247] Caches are synced for disruption
I0412 19:58:36.015220 1 disruption.go:371] Sending events to api server.
I0412 19:58:36.388338 1 shared_informer.go:247] Caches are synced for garbage collector
I0412 19:58:36.411463 1 shared_informer.go:247] Caches are synced for garbage collector
I0412 19:58:36.411493 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0412 19:58:36.620006 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mkwvw"
I0412 19:58:36.621916 1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bc6md"
I0412 19:58:36.667214 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
I0412 19:58:36.686526 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
I0412 19:58:36.767012 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9vvk4"
I0412 19:58:36.771633 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-gc8l7"
I0412 19:58:36.793349 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-9vvk4"
*
* ==> kube-proxy [770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7] <==
* I0412 19:58:37.228735 1 node.go:163] Successfully retrieved node IP: 192.168.76.2
I0412 19:58:37.228819 1 server_others.go:138] "Detected node IP" address="192.168.76.2"
I0412 19:58:37.228864 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0412 19:58:37.313916 1 server_others.go:206] "Using iptables Proxier"
I0412 19:58:37.313955 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0412 19:58:37.313967 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0412 19:58:37.313997 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0412 19:58:37.314465 1 server.go:656] "Version info" version="v1.23.5"
I0412 19:58:37.315515 1 config.go:226] "Starting endpoint slice config controller"
I0412 19:58:37.315539 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0412 19:58:37.315562 1 config.go:317] "Starting service config controller"
I0412 19:58:37.315567 1 shared_informer.go:240] Waiting for caches to sync for service config
I0412 19:58:37.416490 1 shared_informer.go:247] Caches are synced for service config
I0412 19:58:37.416525 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [a5102a9c6c18880f279c76b5b41a685ac2be3dca5038c7565237cec6b8c986b9] <==
* W0412 19:58:20.899632 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0412 19:58:20.899646 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0412 19:58:20.900156 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0412 19:58:20.900258 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0412 19:58:20.900376 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0412 19:58:20.900403 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0412 19:58:20.900468 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0412 19:58:20.900489 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0412 19:58:20.900769 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0412 19:58:20.900795 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0412 19:58:21.713827 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0412 19:58:21.713866 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0412 19:58:21.729292 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0412 19:58:21.729328 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0412 19:58:21.733510 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0412 19:58:21.733557 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0412 19:58:21.765763 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0412 19:58:21.765798 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0412 19:58:21.788461 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0412 19:58:21.788510 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0412 19:58:21.847302 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0412 19:58:21.847341 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0412 19:58:21.910933 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0412 19:58:21.910985 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
I0412 19:58:22.492327 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Tue 2022-04-12 19:58:00 UTC, end at Tue 2022-04-12 20:02:38 UTC. --
Apr 12 20:00:39 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:39.659836 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:00:44 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:44.660928 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:00:49 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:49.661798 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:00:54 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:54.662747 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:00:59 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:59.664392 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:04 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:04.665445 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:09 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:09.666793 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:14 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:14.668163 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:18 pause-20220412195428-42006 kubelet[1538]: I0412 20:01:18.704743 1538 scope.go:110] "RemoveContainer" containerID="bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff"
Apr 12 20:01:19 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:19.669922 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:24 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:24.670800 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:29 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:29.672355 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:34 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:34.673354 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:39 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:39.675128 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:44 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:44.676332 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:49 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:49.677013 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:54 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:54.678718 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:01:59 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:59.680439 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:02:04 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:04.681729 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:02:09 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:09.682646 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:02:14 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:14.683977 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:02:19 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:19.685022 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:02:24 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:24.686236 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:02:29 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:29.687429 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 20:02:34 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:34.688474 1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220412195428-42006 -n pause-20220412195428-42006
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220412195428-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-gc8l7
helpers_test.go:272: ======> post-mortem[TestPause/serial/Start]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220412195428-42006 describe pod coredns-64897985d-gc8l7
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220412195428-42006 describe pod coredns-64897985d-gc8l7: exit status 1 (61.167307ms)
** stderr **
Error from server (NotFound): pods "coredns-64897985d-gc8l7" not found
** /stderr **
helpers_test.go:277: kubectl --context pause-20220412195428-42006 describe pod coredns-64897985d-gc8l7: exit status 1
--- FAIL: TestPause/serial/Start (491.32s)