=== RUN TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run: out/minikube-linux-amd64 start -p dockerenv-198309 --driver=docker --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-198309 --driver=docker --container-runtime=containerd: (23.934593947s)
docker_test.go:189: (dbg) Run: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-198309"
docker_test.go:220: (dbg) Run: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0ktuxNATN9dN/agent.596147" SSH_AGENT_PID="596148" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:220: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0ktuxNATN9dN/agent.596147" SSH_AGENT_PID="596148" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version": exit status 1 (215.515436ms)
-- stdout --
Client: Docker Engine - Community
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:32:12 2023
OS/Arch: linux/amd64
Context: default
-- /stdout --
** stderr **
error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 32777 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:uHVI9dczVaXYDPQmqb3AGbbRvPX6uVfEX2VBipP8SXw.
Please contact your system administrator.
Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/jenkins/.ssh/known_hosts:5
remove with:
ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:32777"
RSA host key for [127.0.0.1]:32777 has changed and you have requested strict checking.
Host key verification failed.
** /stderr **
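[editor note] Why this fails before any version data comes back: with DOCKER_HOST=ssh://docker@127.0.0.1:32777, the docker CLI tunnels every API call through the SSH helper shown in the stderr above, so the host-key rejection aborts the connection before the daemon is ever reached (the client-side fields still print because they need no connection). Re-running the tunnel command by hand, copied verbatim from the error, should reproduce the same known_hosts failure:

  # the exact helper the docker CLI invoked, per the stderr above
  ssh -o ConnectTimeout=30 -l docker -p 32777 -- 127.0.0.1 docker system dial-stdio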
docker_test.go:222: failed to execute 'docker version', error: exit status 1, output:
-- stdout --
Client: Docker Engine - Community
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:32:12 2023
OS/Arch: linux/amd64
Context: default
-- /stdout --
** stderr **
error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 32777 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:uHVI9dczVaXYDPQmqb3AGbbRvPX6uVfEX2VBipP8SXw.
Please contact your system administrator.
Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/jenkins/.ssh/known_hosts:5
remove with:
ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:32777"
RSA host key for [127.0.0.1]:32777 has changed and you have requested strict checking.
Host key verification failed.
** /stderr **
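[editor note] Root cause and likely fix: /home/jenkins/.ssh/known_hosts:5 still holds the RSA host key of an earlier minikube node published on the same ephemeral port 127.0.0.1:32777, so strict checking rejects the freshly generated key of dockerenv-198309. The cleanup is the command the error itself prints; the ssh_config stanza below is an assumption about how a throwaway CI agent could avoid recurrences, not something this job configures:

  # drop the stale entry flagged at known_hosts:5, then re-run the test
  ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:32777"

  # optional ~/.ssh/config stanza for ephemeral loopback ports (assumption):
  #   Host 127.0.0.1
  #     StrictHostKeyChecking no
  #     UserKnownHostsFile /dev/null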
panic.go:522: *** TestDockerEnvContainerd FAILED at 2023-09-06 23:42:23.049871857 +0000 UTC m=+287.281237186
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect dockerenv-198309
helpers_test.go:235: (dbg) docker inspect dockerenv-198309:
-- stdout --
[
{
"Id": "789bbaaf7800124f19ea76432309e9b801ab95f3d503a669164157ed9544a919",
"Created": "2023-09-06T23:41:54.004586393Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 594056,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-09-06T23:41:54.285904185Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:a5b1b95d50f24b5df6a9115c9ada0cb74f27ed4b03c4761eb60ee23f0bdd5210",
"ResolvConfPath": "/var/lib/docker/containers/789bbaaf7800124f19ea76432309e9b801ab95f3d503a669164157ed9544a919/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/789bbaaf7800124f19ea76432309e9b801ab95f3d503a669164157ed9544a919/hostname",
"HostsPath": "/var/lib/docker/containers/789bbaaf7800124f19ea76432309e9b801ab95f3d503a669164157ed9544a919/hosts",
"LogPath": "/var/lib/docker/containers/789bbaaf7800124f19ea76432309e9b801ab95f3d503a669164157ed9544a919/789bbaaf7800124f19ea76432309e9b801ab95f3d503a669164157ed9544a919-json.log",
"Name": "/dockerenv-198309",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"dockerenv-198309:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "dockerenv-198309",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 8388608000,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 16777216000,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/8e288127fd08c636046e5f1ed15ab3f7df567116801f248674d9b76dce50be90-init/diff:/var/lib/docker/overlay2/618880dca05f65eb24170abee02c12007f357b2a80689bfc3ba8a731d6572e38/diff",
"MergedDir": "/var/lib/docker/overlay2/8e288127fd08c636046e5f1ed15ab3f7df567116801f248674d9b76dce50be90/merged",
"UpperDir": "/var/lib/docker/overlay2/8e288127fd08c636046e5f1ed15ab3f7df567116801f248674d9b76dce50be90/diff",
"WorkDir": "/var/lib/docker/overlay2/8e288127fd08c636046e5f1ed15ab3f7df567116801f248674d9b76dce50be90/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "dockerenv-198309",
"Source": "/var/lib/docker/volumes/dockerenv-198309/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "dockerenv-198309",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "dockerenv-198309",
"name.minikube.sigs.k8s.io": "dockerenv-198309",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c2e57019918d48db6fa2cae7f9f25272c0d81ad25fca24917437e4a41cd224ba",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32777"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32776"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32773"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32775"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32774"
}
]
},
"SandboxKey": "/var/run/docker/netns/c2e57019918d",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"dockerenv-198309": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": [
"789bbaaf7800",
"dockerenv-198309"
],
"NetworkID": "51e9fa87ef7087fe32e90aa6418bd0e8e7960993eb030540a1d90b9703bf62e4",
"EndpointID": "51a4072fca342ac7d09baf41d7c2604039d6a97052953b08e7d3145443019c4c",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:31:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
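[editor note] The inspect output above confirms the collision surface: the node's 22/tcp is published on the ephemeral host port 127.0.0.1:32777, which a previous cluster on this agent had evidently used with a different host key. The mapping can be read back with the same inspect template minikube itself runs later in this log:

  # query the host port mapped to the node's SSH port
  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' dockerenv-198309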
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-198309 -n dockerenv-198309
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p dockerenv-198309 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p dockerenv-198309 logs -n 25: (1.353075198s)
helpers_test.go:252: TestDockerEnvContainerd logs:
-- stdout --
*
* ==> Audit <==
* |------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
| delete | -p download-docker-757255 | download-docker-757255 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
| start | --download-only -p | binary-mirror-252087 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | |
| | binary-mirror-252087 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:46029 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p binary-mirror-252087 | binary-mirror-252087 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:38 UTC |
| start | -p addons-626363 | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:38 UTC | 06 Sep 23 23:40 UTC |
| | --wait=true --memory=4000 | | | | | |
| | --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | enable headlamp | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | -p addons-626363 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | disable cloud-spanner -p | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | addons-626363 | | | | | |
| addons | addons-626363 addons disable | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | helm-tiller --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-626363 ip | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| addons | addons-626363 addons disable | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-626363 addons | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-626363 ssh curl -s | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| addons | disable inspektor-gadget -p | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | addons-626363 | | | | | |
| ip | addons-626363 ip | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| addons | addons-626363 addons disable | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | ingress-dns --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-626363 addons disable | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:40 UTC | 06 Sep 23 23:40 UTC |
| | ingress --alsologtostderr -v=1 | | | | | |
| addons | addons-626363 addons | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-626363 addons | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-626363 addons disable | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| stop | -p addons-626363 | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| addons | enable dashboard -p | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| | addons-626363 | | | | | |
| addons | disable dashboard -p | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| | addons-626363 | | | | | |
| addons | disable gvisor -p | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| | addons-626363 | | | | | |
| delete | -p addons-626363 | addons-626363 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:41 UTC |
| start | -p dockerenv-198309 | dockerenv-198309 | jenkins | v1.31.2 | 06 Sep 23 23:41 UTC | 06 Sep 23 23:42 UTC |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| docker-env | --ssh-host --ssh-add -p | dockerenv-198309 | jenkins | v1.31.2 | 06 Sep 23 23:42 UTC | 06 Sep 23 23:42 UTC |
| | dockerenv-198309 | | | | | |
|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/09/06 23:41:48
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.20.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0906 23:41:48.081670 593434 out.go:296] Setting OutFile to fd 1 ...
I0906 23:41:48.081920 593434 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:41:48.081924 593434 out.go:309] Setting ErrFile to fd 2...
I0906 23:41:48.081927 593434 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0906 23:41:48.082119 593434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17174-571027/.minikube/bin
I0906 23:41:48.082684 593434 out.go:303] Setting JSON to false
I0906 23:41:48.084143 593434 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":19181,"bootTime":1694024527,"procs":486,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0906 23:41:48.084196 593434 start.go:138] virtualization: kvm guest
I0906 23:41:48.086525 593434 out.go:177] * [dockerenv-198309] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0906 23:41:48.088417 593434 out.go:177] - MINIKUBE_LOCATION=17174
I0906 23:41:48.089659 593434 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0906 23:41:48.088488 593434 notify.go:220] Checking for updates...
I0906 23:41:48.092033 593434 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17174-571027/kubeconfig
I0906 23:41:48.093247 593434 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17174-571027/.minikube
I0906 23:41:48.094553 593434 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0906 23:41:48.095769 593434 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0906 23:41:48.097304 593434 driver.go:373] Setting default libvirt URI to qemu:///system
I0906 23:41:48.119819 593434 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
I0906 23:41:48.119904 593434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0906 23:41:48.171264 593434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-06 23:41:48.162785021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648066560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0906 23:41:48.171350 593434 docker.go:294] overlay module found
I0906 23:41:48.173260 593434 out.go:177] * Using the docker driver based on user configuration
I0906 23:41:48.174637 593434 start.go:298] selected driver: docker
I0906 23:41:48.174644 593434 start.go:902] validating driver "docker" against <nil>
I0906 23:41:48.174655 593434 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0906 23:41:48.174752 593434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0906 23:41:48.225127 593434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-06 23:41:48.216902535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1040-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648066560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0906 23:41:48.225287 593434 start_flags.go:307] no existing cluster config was found, will generate one from the flags
I0906 23:41:48.225774 593434 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
I0906 23:41:48.225901 593434 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
I0906 23:41:48.227726 593434 out.go:177] * Using Docker driver with root privileges
I0906 23:41:48.228999 593434 cni.go:84] Creating CNI manager for ""
I0906 23:41:48.229008 593434 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0906 23:41:48.229019 593434 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
I0906 23:41:48.229026 593434 start_flags.go:321] config:
{Name:dockerenv-198309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:dockerenv-198309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0906 23:41:48.230512 593434 out.go:177] * Starting control plane node dockerenv-198309 in cluster dockerenv-198309
I0906 23:41:48.231714 593434 cache.go:122] Beginning downloading kic base image for docker with containerd
I0906 23:41:48.233008 593434 out.go:177] * Pulling base image ...
I0906 23:41:48.234177 593434 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
I0906 23:41:48.234211 593434 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17174-571027/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4
I0906 23:41:48.234217 593434 cache.go:57] Caching tarball of preloaded images
I0906 23:41:48.234265 593434 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon
I0906 23:41:48.234290 593434 preload.go:174] Found /home/jenkins/minikube-integration/17174-571027/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0906 23:41:48.234299 593434 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on containerd
I0906 23:41:48.234636 593434 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/config.json ...
I0906 23:41:48.234651 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/config.json: {Name:mkc9ba429173547393e1e7b339f91951e4bc93b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:41:48.252826 593434 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b in local docker daemon, skipping pull
I0906 23:41:48.252854 593434 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b exists in daemon, skipping load
I0906 23:41:48.252880 593434 cache.go:195] Successfully downloaded all kic artifacts
I0906 23:41:48.252912 593434 start.go:365] acquiring machines lock for dockerenv-198309: {Name:mk9a0bbbfa37f973993da230a5a7c99881dc8132 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0906 23:41:48.253014 593434 start.go:369] acquired machines lock for "dockerenv-198309" in 87.716µs
I0906 23:41:48.253033 593434 start.go:93] Provisioning new machine with config: &{Name:dockerenv-198309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:dockerenv-198309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0906 23:41:48.253139 593434 start.go:125] createHost starting for "" (driver="docker")
I0906 23:41:48.255060 593434 out.go:204] * Creating docker container (CPUs=2, Memory=8000MB) ...
I0906 23:41:48.255326 593434 start.go:159] libmachine.API.Create for "dockerenv-198309" (driver="docker")
I0906 23:41:48.255350 593434 client.go:168] LocalClient.Create starting
I0906 23:41:48.255443 593434 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca.pem
I0906 23:41:48.255482 593434 main.go:141] libmachine: Decoding PEM data...
I0906 23:41:48.255499 593434 main.go:141] libmachine: Parsing certificate...
I0906 23:41:48.255576 593434 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17174-571027/.minikube/certs/cert.pem
I0906 23:41:48.255597 593434 main.go:141] libmachine: Decoding PEM data...
I0906 23:41:48.255605 593434 main.go:141] libmachine: Parsing certificate...
I0906 23:41:48.256023 593434 cli_runner.go:164] Run: docker network inspect dockerenv-198309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0906 23:41:48.271688 593434 cli_runner.go:211] docker network inspect dockerenv-198309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0906 23:41:48.271759 593434 network_create.go:281] running [docker network inspect dockerenv-198309] to gather additional debugging logs...
I0906 23:41:48.271773 593434 cli_runner.go:164] Run: docker network inspect dockerenv-198309
W0906 23:41:48.288301 593434 cli_runner.go:211] docker network inspect dockerenv-198309 returned with exit code 1
I0906 23:41:48.288321 593434 network_create.go:284] error running [docker network inspect dockerenv-198309]: docker network inspect dockerenv-198309: exit status 1
stdout:
[]
stderr:
Error response from daemon: network dockerenv-198309 not found
I0906 23:41:48.288330 593434 network_create.go:286] output of [docker network inspect dockerenv-198309]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network dockerenv-198309 not found
** /stderr **
I0906 23:41:48.288385 593434 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0906 23:41:48.304435 593434 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00138f600}
I0906 23:41:48.304470 593434 network_create.go:123] attempt to create docker network dockerenv-198309 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0906 23:41:48.304523 593434 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-198309 dockerenv-198309
I0906 23:41:48.357454 593434 network_create.go:107] docker network dockerenv-198309 192.168.49.0/24 created
I0906 23:41:48.357473 593434 kic.go:117] calculated static IP "192.168.49.2" for the "dockerenv-198309" container
I0906 23:41:48.357535 593434 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0906 23:41:48.372616 593434 cli_runner.go:164] Run: docker volume create dockerenv-198309 --label name.minikube.sigs.k8s.io=dockerenv-198309 --label created_by.minikube.sigs.k8s.io=true
I0906 23:41:48.388639 593434 oci.go:103] Successfully created a docker volume dockerenv-198309
I0906 23:41:48.388703 593434 cli_runner.go:164] Run: docker run --rm --name dockerenv-198309-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-198309 --entrypoint /usr/bin/test -v dockerenv-198309:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -d /var/lib
I0906 23:41:48.898597 593434 oci.go:107] Successfully prepared a docker volume dockerenv-198309
I0906 23:41:48.898635 593434 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
I0906 23:41:48.898656 593434 kic.go:190] Starting extracting preloaded images to volume ...
I0906 23:41:48.898722 593434 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17174-571027/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-198309:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir
I0906 23:41:53.938120 593434 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17174-571027/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-198309:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b -I lz4 -xf /preloaded.tar -C /extractDir: (5.039341981s)
I0906 23:41:53.938169 593434 kic.go:199] duration metric: took 5.039510 seconds to extract preloaded images to volume
W0906 23:41:53.938561 593434 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0906 23:41:53.938683 593434 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0906 23:41:53.989614 593434 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-198309 --name dockerenv-198309 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-198309 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-198309 --network dockerenv-198309 --ip 192.168.49.2 --volume dockerenv-198309:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
I0906 23:41:54.294045 593434 cli_runner.go:164] Run: docker container inspect dockerenv-198309 --format={{.State.Running}}
I0906 23:41:54.310923 593434 cli_runner.go:164] Run: docker container inspect dockerenv-198309 --format={{.State.Status}}
I0906 23:41:54.328475 593434 cli_runner.go:164] Run: docker exec dockerenv-198309 stat /var/lib/dpkg/alternatives/iptables
I0906 23:41:54.405989 593434 oci.go:144] the created container "dockerenv-198309" has a running status.
I0906 23:41:54.406027 593434 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa...
I0906 23:41:54.476298 593434 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0906 23:41:54.496461 593434 cli_runner.go:164] Run: docker container inspect dockerenv-198309 --format={{.State.Status}}
I0906 23:41:54.515104 593434 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0906 23:41:54.515118 593434 kic_runner.go:114] Args: [docker exec --privileged dockerenv-198309 chown docker:docker /home/docker/.ssh/authorized_keys]
I0906 23:41:54.578474 593434 cli_runner.go:164] Run: docker container inspect dockerenv-198309 --format={{.State.Status}}
I0906 23:41:54.597644 593434 machine.go:88] provisioning docker machine ...
I0906 23:41:54.597677 593434 ubuntu.go:169] provisioning hostname "dockerenv-198309"
I0906 23:41:54.597728 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:41:54.620125 593434 main.go:141] libmachine: Using SSH client type: native
I0906 23:41:54.620814 593434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil> [] 0s} 127.0.0.1 32777 <nil> <nil>}
I0906 23:41:54.620833 593434 main.go:141] libmachine: About to run SSH command:
sudo hostname dockerenv-198309 && echo "dockerenv-198309" | sudo tee /etc/hostname
I0906 23:41:54.621561 593434 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43902->127.0.0.1:32777: read: connection reset by peer
I0906 23:41:57.758442 593434 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-198309
I0906 23:41:57.758519 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:41:57.775043 593434 main.go:141] libmachine: Using SSH client type: native
I0906 23:41:57.775428 593434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil> [] 0s} 127.0.0.1 32777 <nil> <nil>}
I0906 23:41:57.775441 593434 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sdockerenv-198309' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-198309/g' /etc/hosts;
else
echo '127.0.1.1 dockerenv-198309' | sudo tee -a /etc/hosts;
fi
fi
I0906 23:41:57.904032 593434 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0906 23:41:57.904060 593434 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17174-571027/.minikube CaCertPath:/home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17174-571027/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17174-571027/.minikube}
I0906 23:41:57.904086 593434 ubuntu.go:177] setting up certificates
I0906 23:41:57.904095 593434 provision.go:83] configureAuth start
I0906 23:41:57.904150 593434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-198309
I0906 23:41:57.920514 593434 provision.go:138] copyHostCerts
I0906 23:41:57.920568 593434 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-571027/.minikube/ca.pem, removing ...
I0906 23:41:57.920574 593434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-571027/.minikube/ca.pem
I0906 23:41:57.920634 593434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17174-571027/.minikube/ca.pem (1078 bytes)
I0906 23:41:57.920723 593434 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-571027/.minikube/cert.pem, removing ...
I0906 23:41:57.920726 593434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-571027/.minikube/cert.pem
I0906 23:41:57.920748 593434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-571027/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17174-571027/.minikube/cert.pem (1123 bytes)
I0906 23:41:57.920794 593434 exec_runner.go:144] found /home/jenkins/minikube-integration/17174-571027/.minikube/key.pem, removing ...
I0906 23:41:57.920797 593434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17174-571027/.minikube/key.pem
I0906 23:41:57.920814 593434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17174-571027/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17174-571027/.minikube/key.pem (1679 bytes)
I0906 23:41:57.920854 593434 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17174-571027/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca-key.pem org=jenkins.dockerenv-198309 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube dockerenv-198309]
I0906 23:41:58.086735 593434 provision.go:172] copyRemoteCerts
I0906 23:41:58.086793 593434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0906 23:41:58.086829 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:41:58.102907 593434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa Username:docker}
I0906 23:41:58.192309 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0906 23:41:58.213432 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0906 23:41:58.233654 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0906 23:41:58.256964 593434 provision.go:86] duration metric: configureAuth took 352.859191ms
I0906 23:41:58.256981 593434 ubuntu.go:193] setting minikube options for container-runtime
I0906 23:41:58.257153 593434 config.go:182] Loaded profile config "dockerenv-198309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:41:58.257158 593434 machine.go:91] provisioned docker machine in 3.659505009s
I0906 23:41:58.257163 593434 client.go:171] LocalClient.Create took 10.001809529s
I0906 23:41:58.257193 593434 start.go:167] duration metric: libmachine.API.Create for "dockerenv-198309" took 10.001856241s
I0906 23:41:58.257200 593434 start.go:300] post-start starting for "dockerenv-198309" (driver="docker")
I0906 23:41:58.257207 593434 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0906 23:41:58.257249 593434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0906 23:41:58.257285 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:41:58.273409 593434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa Username:docker}
I0906 23:41:58.364520 593434 ssh_runner.go:195] Run: cat /etc/os-release
I0906 23:41:58.367449 593434 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0906 23:41:58.367472 593434 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0906 23:41:58.367478 593434 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0906 23:41:58.367483 593434 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0906 23:41:58.367491 593434 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-571027/.minikube/addons for local assets ...
I0906 23:41:58.367538 593434 filesync.go:126] Scanning /home/jenkins/minikube-integration/17174-571027/.minikube/files for local assets ...
I0906 23:41:58.367552 593434 start.go:303] post-start completed in 110.347856ms
I0906 23:41:58.367833 593434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-198309
I0906 23:41:58.384928 593434 profile.go:148] Saving config to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/config.json ...
I0906 23:41:58.385153 593434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0906 23:41:58.385209 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:41:58.401093 593434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa Username:docker}
I0906 23:41:58.488808 593434 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0906 23:41:58.492827 593434 start.go:128] duration metric: createHost completed in 10.239672937s
I0906 23:41:58.492843 593434 start.go:83] releasing machines lock for "dockerenv-198309", held for 10.239821357s
I0906 23:41:58.492906 593434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-198309
I0906 23:41:58.509422 593434 ssh_runner.go:195] Run: cat /version.json
I0906 23:41:58.509448 593434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0906 23:41:58.509473 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:41:58.509503 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:41:58.526818 593434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa Username:docker}
I0906 23:41:58.526818 593434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa Username:docker}
I0906 23:41:58.713207 593434 ssh_runner.go:195] Run: systemctl --version
I0906 23:41:58.717560 593434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0906 23:41:58.721444 593434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0906 23:41:58.743379 593434 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0906 23:41:58.743436 593434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0906 23:41:58.767776 593434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0906 23:41:58.767790 593434 start.go:466] detecting cgroup driver to use...
I0906 23:41:58.767819 593434 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0906 23:41:58.767859 593434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0906 23:41:58.778803 593434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0906 23:41:58.788800 593434 docker.go:196] disabling cri-docker service (if available) ...
I0906 23:41:58.788840 593434 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0906 23:41:58.800811 593434 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0906 23:41:58.813590 593434 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0906 23:41:58.889554 593434 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0906 23:41:58.964288 593434 docker.go:212] disabling docker service ...
I0906 23:41:58.964344 593434 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0906 23:41:58.984191 593434 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0906 23:41:58.994389 593434 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0906 23:41:59.068261 593434 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0906 23:41:59.140162 593434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0906 23:41:59.150225 593434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0906 23:41:59.164454 593434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0906 23:41:59.172956 593434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0906 23:41:59.181386 593434 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0906 23:41:59.181438 593434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0906 23:41:59.189895 593434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0906 23:41:59.198175 593434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0906 23:41:59.206421 593434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0906 23:41:59.214608 593434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0906 23:41:59.222336 593434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0906 23:41:59.230615 593434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0906 23:41:59.237782 593434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0906 23:41:59.245063 593434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0906 23:41:59.318937 593434 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0906 23:41:59.412922 593434 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
I0906 23:41:59.413004 593434 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0906 23:41:59.416585 593434 start.go:534] Will wait 60s for crictl version
I0906 23:41:59.416637 593434 ssh_runner.go:195] Run: which crictl
I0906 23:41:59.419701 593434 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0906 23:41:59.451531 593434 start.go:550] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.6.22
RuntimeApiVersion: v1
I0906 23:41:59.451593 593434 ssh_runner.go:195] Run: containerd --version
I0906 23:41:59.475005 593434 ssh_runner.go:195] Run: containerd --version
I0906 23:41:59.501239 593434 out.go:177] * Preparing Kubernetes v1.28.1 on containerd 1.6.22 ...
I0906 23:41:59.502560 593434 cli_runner.go:164] Run: docker network inspect dockerenv-198309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0906 23:41:59.518457 593434 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0906 23:41:59.521979 593434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0906 23:41:59.531841 593434 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
I0906 23:41:59.531880 593434 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:41:59.562861 593434 containerd.go:604] all images are preloaded for containerd runtime.
I0906 23:41:59.562872 593434 containerd.go:518] Images already preloaded, skipping extraction
I0906 23:41:59.562912 593434 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 23:41:59.593919 593434 containerd.go:604] all images are preloaded for containerd runtime.
I0906 23:41:59.593937 593434 cache_images.go:84] Images are preloaded, skipping loading
I0906 23:41:59.593984 593434 ssh_runner.go:195] Run: sudo crictl info
I0906 23:41:59.626861 593434 cni.go:84] Creating CNI manager for ""
I0906 23:41:59.626871 593434 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0906 23:41:59.626888 593434 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0906 23:41:59.626904 593434 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-198309 NodeName:dockerenv-198309 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0906 23:41:59.627022 593434 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "dockerenv-198309"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
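The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what kubeadm consumes below via --config once the rendered file lands at /var/tmp/minikube/kubeadm.yaml. A sketch for sanity-checking such a file against kubeadm's own defaults (assuming kubeadm is on PATH):
    kubeadm config print init-defaults \
      --component-configs KubeletConfiguration,KubeProxyConfiguration > /tmp/defaults.yaml
    diff /tmp/defaults.yaml /var/tmp/minikube/kubeadm.yaml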
I0906 23:41:59.627082 593434 kubeadm.go:976] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=dockerenv-198309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.28.1 ClusterName:dockerenv-198309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0906 23:41:59.627124 593434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
I0906 23:41:59.635106 593434 binaries.go:44] Found k8s binaries, skipping transfer
I0906 23:41:59.635158 593434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0906 23:41:59.643025 593434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
I0906 23:41:59.658479 593434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0906 23:41:59.673971 593434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
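The 10-kubeadm.conf drop-in scp'd above carries the ExecStart override shown at kubeadm.go:976; systemd only picks it up after a daemon-reload. A sketch of the usual verification:
    sudo systemctl daemon-reload
    systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf override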
I0906 23:41:59.689272 593434 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0906 23:41:59.692310 593434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0906 23:41:59.701669 593434 certs.go:56] Setting up /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309 for IP: 192.168.49.2
I0906 23:41:59.701697 593434 certs.go:190] acquiring lock for shared ca certs: {Name:mkf023f174f877b5e876cc103f58e9ab0cfa5d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:41:59.701848 593434 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17174-571027/.minikube/ca.key
I0906 23:41:59.701898 593434 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17174-571027/.minikube/proxy-client-ca.key
I0906 23:41:59.701941 593434 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/client.key
I0906 23:41:59.701952 593434 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/client.crt with IP's: []
I0906 23:41:59.859247 593434 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/client.crt ...
I0906 23:41:59.859264 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/client.crt: {Name:mkeb55b1df4595214c09d4db984b8a459c296f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:41:59.859442 593434 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/client.key ...
I0906 23:41:59.859448 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/client.key: {Name:mk43292d3e28e2ef0f53087d8cf13ed599e98d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:41:59.859520 593434 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.key.dd3b5fb2
I0906 23:41:59.859528 593434 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0906 23:41:59.956269 593434 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.crt.dd3b5fb2 ...
I0906 23:41:59.956286 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.crt.dd3b5fb2: {Name:mk6ad5a5aaf404a8a56e80b0479657dedb5bb07f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:41:59.956433 593434 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.key.dd3b5fb2 ...
I0906 23:41:59.956439 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.key.dd3b5fb2: {Name:mkebb84dc5e2b5bc34bbcf7c6f1f8e6584861d2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:41:59.956503 593434 certs.go:337] copying /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.crt
I0906 23:41:59.956562 593434 certs.go:341] copying /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.key
I0906 23:41:59.956603 593434 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.key
I0906 23:41:59.956612 593434 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.crt with IP's: []
I0906 23:42:00.126114 593434 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.crt ...
I0906 23:42:00.126130 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.crt: {Name:mk38f9c1a950405c24ce8f32bcdb6d49715af366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:42:00.126299 593434 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.key ...
I0906 23:42:00.126304 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.key: {Name:mk8ac0965fa7f57e95f72dc73a2aa82590337526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:42:00.126523 593434 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-571027/.minikube/certs/home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca-key.pem (1675 bytes)
I0906 23:42:00.126555 593434 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-571027/.minikube/certs/home/jenkins/minikube-integration/17174-571027/.minikube/certs/ca.pem (1078 bytes)
I0906 23:42:00.126575 593434 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-571027/.minikube/certs/home/jenkins/minikube-integration/17174-571027/.minikube/certs/cert.pem (1123 bytes)
I0906 23:42:00.126595 593434 certs.go:437] found cert: /home/jenkins/minikube-integration/17174-571027/.minikube/certs/home/jenkins/minikube-integration/17174-571027/.minikube/certs/key.pem (1679 bytes)
I0906 23:42:00.127160 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0906 23:42:00.149470 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0906 23:42:00.170243 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0906 23:42:00.190560 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/profiles/dockerenv-198309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0906 23:42:00.210969 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0906 23:42:00.231311 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0906 23:42:00.251843 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0906 23:42:00.272398 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0906 23:42:00.293110 593434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17174-571027/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0906 23:42:00.313579 593434 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0906 23:42:00.328924 593434 ssh_runner.go:195] Run: openssl version
I0906 23:42:00.333763 593434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0906 23:42:00.342044 593434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0906 23:42:00.345161 593434 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 6 23:38 /usr/share/ca-certificates/minikubeCA.pem
I0906 23:42:00.345193 593434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0906 23:42:00.351222 593434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
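The b5213941.0 symlink above follows OpenSSL's subject-hash lookup convention: `openssl x509 -hash -noout` prints the hash that verifiers expect as the link name under /etc/ssl/certs. Reproducing the step by hand, as a sketch:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here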
I0906 23:42:00.359296 593434 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0906 23:42:00.362223 593434 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0906 23:42:00.362292 593434 kubeadm.go:404] StartCluster: {Name:dockerenv-198309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:dockerenv-198309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0906 23:42:00.362381 593434 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0906 23:42:00.362418 593434 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0906 23:42:00.394717 593434 cri.go:89] found id: ""
I0906 23:42:00.394780 593434 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0906 23:42:00.403019 593434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0906 23:42:00.410936 593434 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0906 23:42:00.410990 593434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0906 23:42:00.418685 593434 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0906 23:42:00.418718 593434 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0906 23:42:00.461976 593434 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
I0906 23:42:00.462073 593434 kubeadm.go:322] [preflight] Running pre-flight checks
I0906 23:42:00.497829 593434 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0906 23:42:00.497938 593434 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1040-gcp
I0906 23:42:00.498021 593434 kubeadm.go:322] OS: Linux
I0906 23:42:00.498067 593434 kubeadm.go:322] CGROUPS_CPU: enabled
I0906 23:42:00.498141 593434 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I0906 23:42:00.498189 593434 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0906 23:42:00.498226 593434 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0906 23:42:00.498273 593434 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0906 23:42:00.498312 593434 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0906 23:42:00.498348 593434 kubeadm.go:322] CGROUPS_PIDS: enabled
I0906 23:42:00.498392 593434 kubeadm.go:322] CGROUPS_HUGETLB: enabled
I0906 23:42:00.498429 593434 kubeadm.go:322] CGROUPS_BLKIO: enabled
I0906 23:42:00.563970 593434 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0906 23:42:00.564092 593434 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0906 23:42:00.564212 593434 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0906 23:42:00.751587 593434 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0906 23:42:00.755135 593434 out.go:204] - Generating certificates and keys ...
I0906 23:42:00.755279 593434 kubeadm.go:322] [certs] Using existing ca certificate authority
I0906 23:42:00.755390 593434 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0906 23:42:01.016703 593434 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0906 23:42:01.073754 593434 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0906 23:42:01.347338 593434 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0906 23:42:01.623464 593434 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0906 23:42:01.851984 593434 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0906 23:42:01.852153 593434 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [dockerenv-198309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0906 23:42:02.067202 593434 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0906 23:42:02.067319 593434 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-198309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0906 23:42:02.255667 593434 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0906 23:42:02.328893 593434 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0906 23:42:02.421771 593434 kubeadm.go:322] [certs] Generating "sa" key and public key
I0906 23:42:02.421836 593434 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0906 23:42:02.659964 593434 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0906 23:42:02.722792 593434 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0906 23:42:02.909550 593434 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0906 23:42:03.111352 593434 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0906 23:42:03.111810 593434 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0906 23:42:03.114605 593434 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0906 23:42:03.116401 593434 out.go:204] - Booting up control plane ...
I0906 23:42:03.116549 593434 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0906 23:42:03.116650 593434 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0906 23:42:03.117148 593434 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0906 23:42:03.128351 593434 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0906 23:42:03.129017 593434 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0906 23:42:03.129067 593434 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0906 23:42:03.205120 593434 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0906 23:42:08.707041 593434 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.501889 seconds
I0906 23:42:08.707201 593434 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0906 23:42:08.719673 593434 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0906 23:42:09.239047 593434 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0906 23:42:09.239315 593434 kubeadm.go:322] [mark-control-plane] Marking the node dockerenv-198309 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0906 23:42:09.750292 593434 kubeadm.go:322] [bootstrap-token] Using token: vv647o.9xk8809pqjjfwhll
I0906 23:42:09.751974 593434 out.go:204] - Configuring RBAC rules ...
I0906 23:42:09.752107 593434 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0906 23:42:09.756589 593434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0906 23:42:09.762576 593434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0906 23:42:09.765580 593434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0906 23:42:09.768102 593434 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0906 23:42:09.770528 593434 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0906 23:42:09.781647 593434 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0906 23:42:09.983215 593434 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0906 23:42:10.218019 593434 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0906 23:42:10.219372 593434 kubeadm.go:322]
I0906 23:42:10.219467 593434 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0906 23:42:10.219473 593434 kubeadm.go:322]
I0906 23:42:10.219572 593434 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0906 23:42:10.219577 593434 kubeadm.go:322]
I0906 23:42:10.219615 593434 kubeadm.go:322] mkdir -p $HOME/.kube
I0906 23:42:10.219685 593434 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0906 23:42:10.219754 593434 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0906 23:42:10.219757 593434 kubeadm.go:322]
I0906 23:42:10.219840 593434 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0906 23:42:10.219856 593434 kubeadm.go:322]
I0906 23:42:10.219931 593434 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0906 23:42:10.219952 593434 kubeadm.go:322]
I0906 23:42:10.220014 593434 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0906 23:42:10.220104 593434 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0906 23:42:10.220204 593434 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0906 23:42:10.220220 593434 kubeadm.go:322]
I0906 23:42:10.220316 593434 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0906 23:42:10.220440 593434 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0906 23:42:10.220449 593434 kubeadm.go:322]
I0906 23:42:10.220553 593434 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vv647o.9xk8809pqjjfwhll \
I0906 23:42:10.220698 593434 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:a2ada76268d5d488ce28944d82faeb083cfa6d709381a798f3683d6c7ba77a82 \
I0906 23:42:10.220725 593434 kubeadm.go:322] --control-plane
I0906 23:42:10.220731 593434 kubeadm.go:322]
I0906 23:42:10.220837 593434 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0906 23:42:10.220842 593434 kubeadm.go:322]
I0906 23:42:10.220945 593434 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vv647o.9xk8809pqjjfwhll \
I0906 23:42:10.221089 593434 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:a2ada76268d5d488ce28944d82faeb083cfa6d709381a798f3683d6c7ba77a82
I0906 23:42:10.223975 593434 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1040-gcp\n", err: exit status 1
I0906 23:42:10.224139 593434 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
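The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key. A sketch to recompute it on the control plane (certificateDir is /var/lib/minikube/certs per the generated config):
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # -> a2ada76268d5d488ce28944d82faeb083cfa6d709381a798f3683d6c7ba77a82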
I0906 23:42:10.224158 593434 cni.go:84] Creating CNI manager for ""
I0906 23:42:10.224166 593434 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0906 23:42:10.225794 593434 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0906 23:42:10.227102 593434 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0906 23:42:10.230765 593434 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
I0906 23:42:10.230776 593434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0906 23:42:10.247972 593434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0906 23:42:10.925552 593434 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0906 23:42:10.925676 593434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0906 23:42:10.925675 593434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2 minikube.k8s.io/name=dockerenv-198309 minikube.k8s.io/updated_at=2023_09_06T23_42_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0906 23:42:11.017230 593434 ops.go:34] apiserver oom_adj: -16
I0906 23:42:11.026107 593434 kubeadm.go:1081] duration metric: took 100.505627ms to wait for elevateKubeSystemPrivileges.
I0906 23:42:11.026136 593434 kubeadm.go:406] StartCluster complete in 10.663880198s
I0906 23:42:11.026157 593434 settings.go:142] acquiring lock: {Name:mk714b238c1478a3a25e7d657d1a550b0f350806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:42:11.026234 593434 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17174-571027/kubeconfig
I0906 23:42:11.026836 593434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17174-571027/kubeconfig: {Name:mk612c776b3ecd195073ca96afaf72d304290efb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0906 23:42:11.027060 593434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0906 23:42:11.027071 593434 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0906 23:42:11.027148 593434 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-198309"
I0906 23:42:11.027160 593434 addons.go:69] Setting default-storageclass=true in profile "dockerenv-198309"
I0906 23:42:11.027164 593434 addons.go:231] Setting addon storage-provisioner=true in "dockerenv-198309"
I0906 23:42:11.027177 593434 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-198309"
I0906 23:42:11.027224 593434 host.go:66] Checking if "dockerenv-198309" exists ...
I0906 23:42:11.027251 593434 config.go:182] Loaded profile config "dockerenv-198309": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0906 23:42:11.027556 593434 cli_runner.go:164] Run: docker container inspect dockerenv-198309 --format={{.State.Status}}
I0906 23:42:11.027731 593434 cli_runner.go:164] Run: docker container inspect dockerenv-198309 --format={{.State.Status}}
I0906 23:42:11.049064 593434 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0906 23:42:11.048711 593434 kapi.go:248] "coredns" deployment in "kube-system" namespace and "dockerenv-198309" context rescaled to 1 replicas
I0906 23:42:11.050569 593434 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0906 23:42:11.050554 593434 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0906 23:42:11.050582 593434 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0906 23:42:11.052104 593434 out.go:177] * Verifying Kubernetes components...
I0906 23:42:11.050636 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:42:11.051015 593434 addons.go:231] Setting addon default-storageclass=true in "dockerenv-198309"
I0906 23:42:11.053437 593434 host.go:66] Checking if "dockerenv-198309" exists ...
I0906 23:42:11.053445 593434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0906 23:42:11.053951 593434 cli_runner.go:164] Run: docker container inspect dockerenv-198309 --format={{.State.Status}}
I0906 23:42:11.071677 593434 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0906 23:42:11.071691 593434 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0906 23:42:11.071745 593434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-198309
I0906 23:42:11.074279 593434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa Username:docker}
I0906 23:42:11.091585 593434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32777 SSHKeyPath:/home/jenkins/minikube-integration/17174-571027/.minikube/machines/dockerenv-198309/id_rsa Username:docker}
I0906 23:42:11.122257 593434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
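The pipeline above injects a hosts{} stanza (mapping host.minikube.internal to 192.168.49.1) plus a log directive into the CoreDNS Corefile, then replaces the ConfigMap. A sketch to confirm the rewrite landed, using any kubectl pointed at the cluster:
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expect: hosts { 192.168.49.1 host.minikube.internal ... fallthrough }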
I0906 23:42:11.122838 593434 api_server.go:52] waiting for apiserver process to appear ...
I0906 23:42:11.122878 593434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0906 23:42:11.234009 593434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0906 23:42:11.236392 593434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0906 23:42:11.729193 593434 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
I0906 23:42:11.729264 593434 api_server.go:72] duration metric: took 678.670045ms to wait for apiserver process to appear ...
I0906 23:42:11.729279 593434 api_server.go:88] waiting for apiserver healthz status ...
I0906 23:42:11.729298 593434 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0906 23:42:11.735957 593434 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
ok
I0906 23:42:11.737466 593434 api_server.go:141] control plane version: v1.28.1
I0906 23:42:11.737482 593434 api_server.go:131] duration metric: took 8.196631ms to wait for apiserver health ...
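The healthz wait above is a plain HTTPS GET; by hand it looks like the sketch below (the API server cert covers 192.168.49.2, -k merely skips local trust setup, and /healthz and /version are readable anonymously under default RBAC):
    curl -k https://192.168.49.2:8443/healthz   # -> ok
    curl -k https://192.168.49.2:8443/version   # gitVersion should report v1.28.1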
I0906 23:42:11.737490 593434 system_pods.go:43] waiting for kube-system pods to appear ...
I0906 23:42:11.743727 593434 system_pods.go:59] 4 kube-system pods found
I0906 23:42:11.743755 593434 system_pods.go:61] "etcd-dockerenv-198309" [b42d0f6c-333d-4164-958c-fb7b22199090] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0906 23:42:11.743763 593434 system_pods.go:61] "kube-apiserver-dockerenv-198309" [1fead860-6bd4-42ed-b7ac-3ef9513b8ed7] Running
I0906 23:42:11.743770 593434 system_pods.go:61] "kube-controller-manager-dockerenv-198309" [fd7f9104-f2f3-49f8-a6c5-7422679a6d50] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0906 23:42:11.743777 593434 system_pods.go:61] "kube-scheduler-dockerenv-198309" [5be4eaea-e0fb-406e-a01a-871199118fa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0906 23:42:11.743782 593434 system_pods.go:74] duration metric: took 6.287741ms to wait for pod list to return data ...
I0906 23:42:11.743790 593434 kubeadm.go:581] duration metric: took 693.204566ms to wait for : map[apiserver:true system_pods:true] ...
I0906 23:42:11.743803 593434 node_conditions.go:102] verifying NodePressure condition ...
I0906 23:42:11.746646 593434 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0906 23:42:11.746657 593434 node_conditions.go:123] node cpu capacity is 8
I0906 23:42:11.746665 593434 node_conditions.go:105] duration metric: took 2.859431ms to run NodePressure ...
I0906 23:42:11.746675 593434 start.go:228] waiting for startup goroutines ...
I0906 23:42:11.918235 593434 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0906 23:42:11.920110 593434 addons.go:502] enable addons completed in 893.033766ms: enabled=[storage-provisioner default-storageclass]
I0906 23:42:11.920146 593434 start.go:233] waiting for cluster config update ...
I0906 23:42:11.920161 593434 start.go:242] writing updated cluster config ...
I0906 23:42:11.920417 593434 ssh_runner.go:195] Run: rm -f paused
I0906 23:42:11.968917 593434 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
I0906 23:42:11.970934 593434 out.go:177] * Done! kubectl is now configured to use "dockerenv-198309" cluster and "default" namespace by default
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
2a79d424124c5 b0b1fa0f58c6e Less than a second ago Running kindnet-cni 0 b0849656201d6 kindnet-q9bjh
fd35967b6835e 6cdbabde3874e Less than a second ago Running kube-proxy 0 dbd180a5fea6b kube-proxy-r8mwq
f40f12bd36f71 6e38f40d628db Less than a second ago Exited storage-provisioner 1 8e0577e52e6aa storage-provisioner
481ad4ba0e372 6e38f40d628db 1 second ago Exited storage-provisioner 0 8e0577e52e6aa storage-provisioner
81b812022fc4c b462ce0c8b1ff 19 seconds ago Running kube-scheduler 0 0ee2c88e327b6 kube-scheduler-dockerenv-198309
7a26b25f63553 73deb9a3f7025 19 seconds ago Running etcd 0 e8987b2340a59 etcd-dockerenv-198309
ee64d925e7a7b 5c801295c21d0 19 seconds ago Running kube-apiserver 0 3121d7a7e88f1 kube-apiserver-dockerenv-198309
21d8118aaae85 821b3dfea27be 19 seconds ago Running kube-controller-manager 0 e84ca5c76e9a2 kube-controller-manager-dockerenv-198309
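The table above is the CRI's container listing; the same view can be pulled directly from the node, e.g.:
    minikube -p dockerenv-198309 ssh "sudo crictl ps -a"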
*
* ==> containerd <==
* Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.277337338Z" level=warning msg="cleaning up after shim disconnected" id=f40f12bd36f71009472d584285a195a8bc828ad3a32d7aa0e87b3873380a4b7b namespace=k8s.io
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.277353219Z" level=info msg="cleaning up dead shim"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.316494312Z" level=warning msg="cleanup warnings time=\"2023-09-06T23:42:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1896 runtime=io.containerd.runc.v2\n"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.524413630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8mwq,Uid:b16a92a9-b232-4fa9-b437-c1694e6728ac,Namespace:kube-system,Attempt:0,}"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.525958331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-q9bjh,Uid:a9ed2b6a-a209-41c6-8cde-e896a26f2664,Namespace:kube-system,Attempt:0,}"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.545082420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.545158181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.545168104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.545374272Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbd180a5fea6bccc06f39ece56f98ef0c1bfb16977e6a58b035256ae190ad3f8 pid=1926 runtime=io.containerd.runc.v2
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.547567148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.547644756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.547661137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.547876440Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0849656201d6668b0f9f7473ed2ddd668852ff4f8daeef45176b74db61ad8f2 pid=1938 runtime=io.containerd.runc.v2
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.594407637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8mwq,Uid:b16a92a9-b232-4fa9-b437-c1694e6728ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbd180a5fea6bccc06f39ece56f98ef0c1bfb16977e6a58b035256ae190ad3f8\""
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.597153366Z" level=info msg="CreateContainer within sandbox \"dbd180a5fea6bccc06f39ece56f98ef0c1bfb16977e6a58b035256ae190ad3f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.610322124Z" level=info msg="CreateContainer within sandbox \"dbd180a5fea6bccc06f39ece56f98ef0c1bfb16977e6a58b035256ae190ad3f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd35967b6835ea0df8d4f2051a4c892a7f9979c97cabd16e2d577b0c3a9b8203\""
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.610799506Z" level=info msg="StartContainer for \"fd35967b6835ea0df8d4f2051a4c892a7f9979c97cabd16e2d577b0c3a9b8203\""
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.638316194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gb5v8,Uid:3b4db2a2-c95a-40bf-be4a-d524690bfc7a,Namespace:kube-system,Attempt:0,}"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.663620552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gb5v8,Uid:3b4db2a2-c95a-40bf-be4a-d524690bfc7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\": failed to find network info for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\""
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.720147070Z" level=info msg="StartContainer for \"fd35967b6835ea0df8d4f2051a4c892a7f9979c97cabd16e2d577b0c3a9b8203\" returns successfully"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.834072568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-q9bjh,Uid:a9ed2b6a-a209-41c6-8cde-e896a26f2664,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0849656201d6668b0f9f7473ed2ddd668852ff4f8daeef45176b74db61ad8f2\""
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.837138416Z" level=info msg="CreateContainer within sandbox \"b0849656201d6668b0f9f7473ed2ddd668852ff4f8daeef45176b74db61ad8f2\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.850282661Z" level=info msg="CreateContainer within sandbox \"b0849656201d6668b0f9f7473ed2ddd668852ff4f8daeef45176b74db61ad8f2\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"2a79d424124c56ce9e492a5666638bf0e858c19270135eaa14f0b3606e819ba8\""
Sep 06 23:42:23 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:23.850875999Z" level=info msg="StartContainer for \"2a79d424124c56ce9e492a5666638bf0e858c19270135eaa14f0b3606e819ba8\""
Sep 06 23:42:24 dockerenv-198309 containerd[787]: time="2023-09-06T23:42:24.035484367Z" level=info msg="StartContainer for \"2a79d424124c56ce9e492a5666638bf0e858c19270135eaa14f0b3606e819ba8\" returns successfully"
*
* ==> describe nodes <==
* Name: dockerenv-198309
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=dockerenv-198309
kubernetes.io/os=linux
minikube.k8s.io/commit=cf47a38f14700a28a638c18f21764b75f0a296b2
minikube.k8s.io/name=dockerenv-198309
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_09_06T23_42_10_0700
minikube.k8s.io/version=v1.31.2
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 06 Sep 2023 23:42:06 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: dockerenv-198309
AcquireTime: <unset>
RenewTime: Wed, 06 Sep 2023 23:42:20 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 06 Sep 2023 23:42:10 +0000 Wed, 06 Sep 2023 23:42:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 06 Sep 2023 23:42:10 +0000 Wed, 06 Sep 2023 23:42:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 06 Sep 2023 23:42:10 +0000 Wed, 06 Sep 2023 23:42:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 06 Sep 2023 23:42:10 +0000 Wed, 06 Sep 2023 23:42:10 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: dockerenv-198309
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859440Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859440Ki
pods: 110
System Info:
Machine ID: 94c627f0028149fb94142c5db0c1064f
System UUID: ea526f2b-2bbb-4622-bb30-396cdedac15b
Boot ID: e5950180-8f0a-478c-94d9-54ec6c368fa0
Kernel Version: 5.15.0-1040-gcp
OS Image: Ubuntu 22.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.22
Kubelet Version: v1.28.1
Kube-Proxy Version: v1.28.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-5dd5756b68-gb5v8 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 1s
kube-system etcd-dockerenv-198309 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 14s
kube-system kindnet-q9bjh 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 1s
kube-system kube-apiserver-dockerenv-198309 250m (3%) 0 (0%) 0 (0%) 0 (0%) 15s
kube-system kube-controller-manager-dockerenv-198309 200m (2%) 0 (0%) 0 (0%) 0 (0%) 16s
kube-system kube-proxy-r8mwq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
kube-system kube-scheduler-dockerenv-198309 100m (1%) 0 (0%) 0 (0%) 0 (0%) 14s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 100m (1%)
memory 220Mi (0%) 220Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 0s kube-proxy
Normal Starting 21s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 21s (x8 over 21s) kubelet Node dockerenv-198309 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 21s (x8 over 21s) kubelet Node dockerenv-198309 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 21s (x7 over 21s) kubelet Node dockerenv-198309 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 21s kubelet Updated Node Allocatable limit across pods
Normal Starting 14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 14s kubelet Node dockerenv-198309 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 14s kubelet Node dockerenv-198309 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 14s kubelet Node dockerenv-198309 status is now: NodeHasSufficientPID
Normal NodeNotReady 14s kubelet Node dockerenv-198309 status is now: NodeNotReady
Normal NodeAllocatableEnforced 14s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 14s kubelet Node dockerenv-198309 status is now: NodeReady
Normal RegisteredNode 2s node-controller Node dockerenv-198309 event: Registered Node dockerenv-198309 in Controller
*
* ==> dmesg <==
* [Sep 6 21:12] kauditd_printk_skb: 3 callbacks suppressed
*
* ==> etcd [7a26b25f63553e797ae6a40719b6a758f2e21b64dcf8b9f157687ce5a4280728] <==
* {"level":"info","ts":"2023-09-06T23:42:04.737011Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2023-09-06T23:42:04.734992Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-09-06T23:42:04.738567Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-09-06T23:42:04.738735Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-09-06T23:42:04.738773Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-09-06T23:42:04.738819Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-09-06T23:42:04.738839Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2023-09-06T23:42:05.027678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2023-09-06T23:42:05.027726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2023-09-06T23:42:05.027758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2023-09-06T23:42:05.027781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2023-09-06T23:42:05.027794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2023-09-06T23:42:05.027808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2023-09-06T23:42:05.027821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2023-09-06T23:42:05.028719Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-06T23:42:05.02938Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-06T23:42:05.029402Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-06T23:42:05.029376Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-198309 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2023-09-06T23:42:05.029666Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-06T23:42:05.02979Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-06T23:42:05.029865Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-06T23:42:05.03003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-09-06T23:42:05.030102Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-09-06T23:42:05.030682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2023-09-06T23:42:05.030833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
*
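The election sequence in the etcd section above (pre-candidate, then candidate, then leader at term 2, all by member aec36adc501070cc voting for itself) is the normal single-member raft bootstrap, so etcd came up healthy well before the test failed. A manual health check one could run against this node is sketched below; it is not part of the test output, and the cert paths and the presence of etcdctl in the node image are assumptions based on minikube defaults.

# hypothetical follow-up, assuming etcdctl is available in the node image
minikube -p dockerenv-198309 ssh -- sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
  --cert=/var/lib/minikube/certs/etcd/server.crt \
  --key=/var/lib/minikube/certs/etcd/server.key \
  endpoint health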
* ==> kernel <==
* 23:42:24 up 5:20, 0 users, load average: 2.35, 1.88, 1.68
Linux dockerenv-198309 5.15.0-1040-gcp #48~20.04.1-Ubuntu SMP Fri Aug 25 04:03:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.3 LTS"
*
* ==> kindnet [2a79d424124c56ce9e492a5666638bf0e858c19270135eaa14f0b3606e819ba8] <==
* I0906 23:42:24.217359 1 main.go:102] connected to apiserver: https://10.96.0.1:443
I0906 23:42:24.217434 1 main.go:107] hostIP = 192.168.49.2
podIP = 192.168.49.2
I0906 23:42:24.217561 1 main.go:116] setting mtu 1500 for CNI
I0906 23:42:24.217583 1 main.go:146] kindnetd IP family: "ipv4"
I0906 23:42:24.217603 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
*
* ==> kube-apiserver [ee64d925e7a7b0e7416d2355651d7e7a3e0a8403b7d6b0cb30cf015be0fa1514] <==
* I0906 23:42:06.932209 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0906 23:42:06.932216 1 cache.go:39] Caches are synced for autoregister controller
I0906 23:42:06.932288 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0906 23:42:06.928242 1 shared_informer.go:318] Caches are synced for configmaps
I0906 23:42:06.931721 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0906 23:42:06.931738 1 apf_controller.go:377] Running API Priority and Fairness config worker
I0906 23:42:06.933429 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I0906 23:42:06.934098 1 controller.go:624] quota admission added evaluator for: namespaces
E0906 23:42:07.031163 1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
I0906 23:42:07.116381 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0906 23:42:07.830626 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0906 23:42:07.834208 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0906 23:42:07.834229 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0906 23:42:08.197204 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0906 23:42:08.227308 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0906 23:42:08.333821 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0906 23:42:08.339022 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0906 23:42:08.339904 1 controller.go:624] quota admission added evaluator for: endpoints
I0906 23:42:08.343791 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0906 23:42:08.848106 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0906 23:42:09.971878 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0906 23:42:09.981854 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0906 23:42:09.990462 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0906 23:42:22.952815 1 controller.go:624] quota admission added evaluator for: replicasets.apps
I0906 23:42:23.201217 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
*
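The lone apiserver error at 23:42:07 (syncing a ConfigMap failed because namespaces "kube-system" was not found) fired before the namespace existed and is harmless; the subsequent "quota admission added evaluator" lines show the control plane admitting each core resource type as bootstrap objects were created. A check along these lines would confirm the namespace afterwards (hypothetical follow-up, not part of the run):

kubectl --context dockerenv-198309 get namespace kube-system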
* ==> kube-controller-manager [21d8118aaae85c0abd9e0c3984c8750d71f77cc12e0ac1b70142cd71c0e71c3d] <==
* I0906 23:42:22.393557 1 event.go:307] "Event occurred" object="dockerenv-198309" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node dockerenv-198309 event: Registered Node dockerenv-198309 in Controller"
I0906 23:42:22.393639 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="dockerenv-198309"
I0906 23:42:22.393754 1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
I0906 23:42:22.399656 1 shared_informer.go:318] Caches are synced for daemon sets
I0906 23:42:22.399697 1 shared_informer.go:318] Caches are synced for GC
I0906 23:42:22.403954 1 shared_informer.go:318] Caches are synced for node
I0906 23:42:22.404005 1 range_allocator.go:174] "Sending events to api server"
I0906 23:42:22.404035 1 range_allocator.go:178] "Starting range CIDR allocator"
I0906 23:42:22.404042 1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
I0906 23:42:22.404051 1 shared_informer.go:318] Caches are synced for cidrallocator
I0906 23:42:22.410202 1 range_allocator.go:380] "Set node PodCIDR" node="dockerenv-198309" podCIDRs=["10.244.0.0/24"]
I0906 23:42:22.424015 1 shared_informer.go:318] Caches are synced for cronjob
I0906 23:42:22.480833 1 shared_informer.go:318] Caches are synced for resource quota
I0906 23:42:22.502831 1 shared_informer.go:318] Caches are synced for resource quota
I0906 23:42:22.822545 1 shared_informer.go:318] Caches are synced for garbage collector
I0906 23:42:22.899359 1 shared_informer.go:318] Caches are synced for garbage collector
I0906 23:42:22.899393 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
I0906 23:42:22.957632 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
I0906 23:42:23.209864 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r8mwq"
I0906 23:42:23.212021 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q9bjh"
I0906 23:42:23.327129 1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gb5v8"
I0906 23:42:23.334629 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="377.346518ms"
I0906 23:42:23.340916 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.233807ms"
I0906 23:42:23.341149 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.477µs"
I0906 23:42:23.348001 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="162.21µs"
*
* ==> kube-proxy [fd35967b6835ea0df8d4f2051a4c892a7f9979c97cabd16e2d577b0c3a9b8203] <==
* I0906 23:42:23.752905 1 server_others.go:69] "Using iptables proxy"
I0906 23:42:23.762714 1 node.go:141] Successfully retrieved node IP: 192.168.49.2
I0906 23:42:23.782076 1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0906 23:42:23.817437 1 server_others.go:152] "Using iptables Proxier"
I0906 23:42:23.817489 1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0906 23:42:23.817501 1 server_others.go:438] "Defaulting to no-op detect-local"
I0906 23:42:23.817541 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0906 23:42:23.817789 1 server.go:846] "Version info" version="v1.28.1"
I0906 23:42:23.817807 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0906 23:42:23.818413 1 config.go:188] "Starting service config controller"
I0906 23:42:23.818426 1 config.go:97] "Starting endpoint slice config controller"
I0906 23:42:23.818452 1 shared_informer.go:311] Waiting for caches to sync for service config
I0906 23:42:23.818452 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0906 23:42:23.819834 1 config.go:315] "Starting node config controller"
I0906 23:42:23.819854 1 shared_informer.go:311] Waiting for caches to sync for node config
I0906 23:42:23.918687 1 shared_informer.go:318] Caches are synced for service config
I0906 23:42:23.918754 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0906 23:42:23.920664 1 shared_informer.go:318] Caches are synced for node config
*
* ==> kube-scheduler [81b812022fc4c0ee4df53f90b657278061257f8e7284c7f4ce3845e8b68fd104] <==
* E0906 23:42:07.038553 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0906 23:42:07.038505 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0906 23:42:07.038575 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0906 23:42:07.038635 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0906 23:42:07.038653 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0906 23:42:07.038636 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0906 23:42:07.038783 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0906 23:42:07.038884 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0906 23:42:07.039039 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0906 23:42:07.039078 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0906 23:42:07.039102 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0906 23:42:07.039131 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0906 23:42:07.039480 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0906 23:42:07.039508 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0906 23:42:07.873186 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0906 23:42:07.873229 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0906 23:42:07.957086 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0906 23:42:07.957118 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0906 23:42:08.003404 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0906 23:42:08.003439 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0906 23:42:08.027838 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0906 23:42:08.027883 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0906 23:42:08.053651 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0906 23:42:08.053687 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0906 23:42:10.330045 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
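The Forbidden list/watch errors in the kube-scheduler section between 23:42:07 and 23:42:08 are the usual startup race: the scheduler's informers begin listing resources before the apiserver has finished publishing the system:kube-scheduler RBAC bindings. The "Caches are synced" line at 23:42:10 shows the reflectors recovered on retry, so none of this contributed to the failure. One hypothetical way to confirm the binding once the cluster is up, assuming the standard bootstrap RBAC names:

kubectl --context dockerenv-198309 get clusterrolebinding system:kube-scheduler -o wide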
* ==> kubelet <==
* Sep 06 23:42:22 dockerenv-198309 kubelet[1511]: I0906 23:42:22.485541 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e2420e85-890f-483d-aa91-c9efca2bb06e-tmp\") pod \"storage-provisioner\" (UID: \"e2420e85-890f-483d-aa91-c9efca2bb06e\") " pod="kube-system/storage-provisioner"
Sep 06 23:42:22 dockerenv-198309 kubelet[1511]: I0906 23:42:22.485601 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnxm\" (UniqueName: \"kubernetes.io/projected/e2420e85-890f-483d-aa91-c9efca2bb06e-kube-api-access-hpnxm\") pod \"storage-provisioner\" (UID: \"e2420e85-890f-483d-aa91-c9efca2bb06e\") " pod="kube-system/storage-provisioner"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.160973 1511 scope.go:117] "RemoveContainer" containerID="481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.215219 1511 topology_manager.go:215] "Topology Admit Handler" podUID="b16a92a9-b232-4fa9-b437-c1694e6728ac" podNamespace="kube-system" podName="kube-proxy-r8mwq"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.218295 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b16a92a9-b232-4fa9-b437-c1694e6728ac-kube-proxy\") pod \"kube-proxy-r8mwq\" (UID: \"b16a92a9-b232-4fa9-b437-c1694e6728ac\") " pod="kube-system/kube-proxy-r8mwq"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.218356 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8wx\" (UniqueName: \"kubernetes.io/projected/b16a92a9-b232-4fa9-b437-c1694e6728ac-kube-api-access-lq8wx\") pod \"kube-proxy-r8mwq\" (UID: \"b16a92a9-b232-4fa9-b437-c1694e6728ac\") " pod="kube-system/kube-proxy-r8mwq"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.218406 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b16a92a9-b232-4fa9-b437-c1694e6728ac-xtables-lock\") pod \"kube-proxy-r8mwq\" (UID: \"b16a92a9-b232-4fa9-b437-c1694e6728ac\") " pod="kube-system/kube-proxy-r8mwq"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.218431 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b16a92a9-b232-4fa9-b437-c1694e6728ac-lib-modules\") pod \"kube-proxy-r8mwq\" (UID: \"b16a92a9-b232-4fa9-b437-c1694e6728ac\") " pod="kube-system/kube-proxy-r8mwq"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.221166 1511 topology_manager.go:215] "Topology Admit Handler" podUID="a9ed2b6a-a209-41c6-8cde-e896a26f2664" podNamespace="kube-system" podName="kindnet-q9bjh"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.318717 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9ed2b6a-a209-41c6-8cde-e896a26f2664-xtables-lock\") pod \"kindnet-q9bjh\" (UID: \"a9ed2b6a-a209-41c6-8cde-e896a26f2664\") " pod="kube-system/kindnet-q9bjh"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.318991 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpm4p\" (UniqueName: \"kubernetes.io/projected/a9ed2b6a-a209-41c6-8cde-e896a26f2664-kube-api-access-tpm4p\") pod \"kindnet-q9bjh\" (UID: \"a9ed2b6a-a209-41c6-8cde-e896a26f2664\") " pod="kube-system/kindnet-q9bjh"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.319041 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9ed2b6a-a209-41c6-8cde-e896a26f2664-lib-modules\") pod \"kindnet-q9bjh\" (UID: \"a9ed2b6a-a209-41c6-8cde-e896a26f2664\") " pod="kube-system/kindnet-q9bjh"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.319094 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a9ed2b6a-a209-41c6-8cde-e896a26f2664-cni-cfg\") pod \"kindnet-q9bjh\" (UID: \"a9ed2b6a-a209-41c6-8cde-e896a26f2664\") " pod="kube-system/kindnet-q9bjh"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.332709 1511 topology_manager.go:215] "Topology Admit Handler" podUID="3b4db2a2-c95a-40bf-be4a-d524690bfc7a" podNamespace="kube-system" podName="coredns-5dd5756b68-gb5v8"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.519680 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4db2a2-c95a-40bf-be4a-d524690bfc7a-config-volume\") pod \"coredns-5dd5756b68-gb5v8\" (UID: \"3b4db2a2-c95a-40bf-be4a-d524690bfc7a\") " pod="kube-system/coredns-5dd5756b68-gb5v8"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: I0906 23:42:23.519726 1511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb89b\" (UniqueName: \"kubernetes.io/projected/3b4db2a2-c95a-40bf-be4a-d524690bfc7a-kube-api-access-jb89b\") pod \"coredns-5dd5756b68-gb5v8\" (UID: \"3b4db2a2-c95a-40bf-be4a-d524690bfc7a\") " pod="kube-system/coredns-5dd5756b68-gb5v8"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: E0906 23:42:23.664036 1511 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\": failed to find network info for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\""
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: E0906 23:42:23.664115 1511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\": failed to find network info for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\"" pod="kube-system/coredns-5dd5756b68-gb5v8"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: E0906 23:42:23.664139 1511 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\": failed to find network info for sandbox \"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\"" pod="kube-system/coredns-5dd5756b68-gb5v8"
Sep 06 23:42:23 dockerenv-198309 kubelet[1511]: E0906 23:42:23.664198 1511 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-gb5v8_kube-system(3b4db2a2-c95a-40bf-be4a-d524690bfc7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-gb5v8_kube-system(3b4db2a2-c95a-40bf-be4a-d524690bfc7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\\\": failed to find network info for sandbox \\\"e7fd84516631fb03a0280a33d94fcaadb959dbaec6ba2936386cc3cc403c2c99\\\"\"" pod="kube-system/coredns-5dd5756b68-gb5v8" podUID="3b4db2a2-c95a-40bf-be4a-d524690bfc7a"
Sep 06 23:42:24 dockerenv-198309 kubelet[1511]: I0906 23:42:24.165221 1511 scope.go:117] "RemoveContainer" containerID="481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec"
Sep 06 23:42:24 dockerenv-198309 kubelet[1511]: I0906 23:42:24.165557 1511 scope.go:117] "RemoveContainer" containerID="f40f12bd36f71009472d584285a195a8bc828ad3a32d7aa0e87b3873380a4b7b"
Sep 06 23:42:24 dockerenv-198309 kubelet[1511]: E0906 23:42:24.165839 1511 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e2420e85-890f-483d-aa91-c9efca2bb06e)\"" pod="kube-system/storage-provisioner" podUID="e2420e85-890f-483d-aa91-c9efca2bb06e"
Sep 06 23:42:24 dockerenv-198309 kubelet[1511]: I0906 23:42:24.196062 1511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r8mwq" podStartSLOduration=1.196012036 podCreationTimestamp="2023-09-06 23:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 23:42:24.187099058 +0000 UTC m=+14.240138779" watchObservedRunningTime="2023-09-06 23:42:24.196012036 +0000 UTC m=+14.249051766"
Sep 06 23:42:24 dockerenv-198309 kubelet[1511]: I0906 23:42:24.196317 1511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-q9bjh" podStartSLOduration=1.196171201 podCreationTimestamp="2023-09-06 23:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-06 23:42:24.195872163 +0000 UTC m=+14.248911892" watchObservedRunningTime="2023-09-06 23:42:24.196171201 +0000 UTC m=+14.249210929"
*
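Two transient failures are interleaved in the kubelet section: the coredns-5dd5756b68-gb5v8 sandbox could not get a network because kindnet had only just written its CNI config ("failed to find network info for sandbox"), and storage-provisioner entered a 10s CrashLoopBackOff after its first container exited (its fatal log appears below). Both normally self-heal on retry. A triage sketch one could run by hand, not part of the test output:

# hypothetical manual checks inside the node
minikube -p dockerenv-198309 ssh -- sudo crictl pods --name coredns
minikube -p dockerenv-198309 ssh -- sudo crictl ps -a --name storage-provisioner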
* ==> storage-provisioner [481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec] <==
*
* ==> storage-provisioner [f40f12bd36f71009472d584285a195a8bc828ad3a32d7aa0e87b3873380a4b7b] <==
* I0906 23:42:23.253907 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0906 23:42:23.255189 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
-- /stdout --
** stderr **
E0906 23:42:24.676188 596552 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec": Process exited with status 1
stdout:
stderr:
E0906 23:42:24.673466 2460 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec\": not found" containerID="481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec"
time="2023-09-06T23:42:24Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec\": not found"
output: "\n** stderr ** \nE0906 23:42:24.673466 2460 remote_runtime.go:625] \"ContainerStatus from runtime service failed\" err=\"rpc error: code = NotFound desc = an error occurred when try to find container \\\"481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec\\\": not found\" containerID=\"481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec\"\ntime=\"2023-09-06T23:42:24Z\" level=fatal msg=\"rpc error: code = NotFound desc = an error occurred when try to find container \\\"481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec\\\": not found\"\n\n** /stderr **"
! unable to fetch logs for: storage-provisioner [481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec]
** /stderr **
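The "unable to fetch logs" warning above is a post-mortem artifact rather than a new failure: kubelet had already issued RemoveContainer for 481ad4ba... (see the 23:42:23 and 23:42:24 kubelet lines), so by the time the log collector ran crictl logs --tail 25 the container was gone and containerd returned NotFound, which is also why the first storage-provisioner section is empty. The fatal in the surviving instance (connection refused to 10.96.0.1:443) just means it raced kube-proxy's first iptables sync, which the kube-proxy section shows completing at 23:42:23.9. Reproducing the NotFound by hand would look like this (hypothetical, assuming the same container ID):

minikube -p dockerenv-198309 ssh -- sudo crictl inspect 481ad4ba0e372743d13998503053ebb9dd8fd042ca95766c698a10f531978bec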
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-198309 -n dockerenv-198309
helpers_test.go:261: (dbg) Run: kubectl --context dockerenv-198309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5dd5756b68-gb5v8
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context dockerenv-198309 describe pod coredns-5dd5756b68-gb5v8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-198309 describe pod coredns-5dd5756b68-gb5v8: exit status 1 (64.056495ms)
** stderr **
Error from server (NotFound): pods "coredns-5dd5756b68-gb5v8" not found
** /stderr **
helpers_test.go:279: kubectl --context dockerenv-198309 describe pod coredns-5dd5756b68-gb5v8: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-198309" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p dockerenv-198309
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-198309: (1.849713007s)
--- FAIL: TestDockerEnvContainerd (38.96s)