=== RUN TestPause/serial/Start
pause_test.go:80: (dbg) Run: out/minikube-linux-amd64 start -p pause-507725 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd
pause_test.go:80: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-507725 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd: exit status 80 (9m51.162586746s)
-- stdout --
* [pause-507725] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20535
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "pause-507725" primary control-plane node in "pause-507725" cluster
* Pulling base image v0.0.46-1741860993-20523 ...
* Creating docker container (CPUs=2, Memory=2048MB) ...
* Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
-- /stdout --
** stderr **
E0317 10:59:37.531758 245681 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-7h92s" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-7h92s" not found
E0317 11:03:37.537353 245681 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-linux-amd64 start -p pause-507725 --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=containerd" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-507725
helpers_test.go:235: (dbg) docker inspect pause-507725:
-- stdout --
[
{
"Id": "1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98",
"Created": "2025-03-17T10:59:16.245373249Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 246739,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-03-17T10:59:16.277769413Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
"ResolvConfPath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/hostname",
"HostsPath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/hosts",
"LogPath": "/var/lib/docker/containers/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98/1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98-json.log",
"Name": "/pause-507725",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-507725:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "pause-507725",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "1ec91abc09f598f912ed7d2f18a04156899b159ba821f40e7236ffaa2d0b6a98",
"LowerDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd-init/diff:/var/lib/docker/overlay2/c513cb32e4b42c4b2e1258d7197e5cd39dcbb3306943490e9747416948e6aaf6/diff",
"MergedDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd/merged",
"UpperDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd/diff",
"WorkDir": "/var/lib/docker/overlay2/1fd5adf33cc8e12ca69da7d6b9e0be4e2bfed7ed52ec6638dc21a161e2e4e6bd/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-507725",
"Source": "/var/lib/docker/volumes/pause-507725/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-507725",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-507725",
"name.minikube.sigs.k8s.io": "pause-507725",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "eca5c5922aba6b69430c0e74806f2f880eab4aac4913a892d86b9b1e948a4045",
"SandboxKey": "/var/run/docker/netns/eca5c5922aba",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33048"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33049"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33052"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33050"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33051"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-507725": {
"IPAMConfig": {
"IPv4Address": "192.168.103.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "72:28:05:3d:9a:5b",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "7305c82bb37b2a024025f05e887ad87dca42b0a81244e064bd8ebd79b0338eef",
"EndpointID": "0d5ff564a65b39555ce3272ccb52e242ce246206d9901e420d4506dbf3ae438d",
"Gateway": "192.168.103.1",
"IPAddress": "192.168.103.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"pause-507725",
"1ec91abc09f5"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-507725 -n pause-507725
helpers_test.go:244: <<< TestPause/serial/Start FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/Start]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-507725 logs -n 25
helpers_test.go:252: TestPause/serial/Start logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| stop | -p kubernetes-upgrade-038579 | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 10:57 UTC | 17 Mar 25 10:57 UTC |
| start | -p kubernetes-upgrade-038579 | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 10:57 UTC | 17 Mar 25 11:02 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p missing-upgrade-397855 | missing-upgrade-397855 | jenkins | v1.35.0 | 17 Mar 25 10:57 UTC | 17 Mar 25 10:58 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p running-upgrade-443193 | running-upgrade-443193 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
| start | -p force-systemd-flag-408852 | force-systemd-flag-408852 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p missing-upgrade-397855 | missing-upgrade-397855 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
| ssh | force-systemd-flag-408852 | force-systemd-flag-408852 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-flag-408852 | force-systemd-flag-408852 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:58 UTC |
| start | -p cert-options-442523 | cert-options-442523 | jenkins | v1.35.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:59 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p stopped-upgrade-873690 | minikube | jenkins | v1.26.0 | 17 Mar 25 10:58 UTC | 17 Mar 25 10:59 UTC |
| | --memory=2200 | | | | | |
| | --vm-driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-442523 ssh | cert-options-442523 | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-442523 -- sudo | cert-options-442523 | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-442523 | cert-options-442523 | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
| start | -p pause-507725 --memory=2048 | pause-507725 | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| stop | stopped-upgrade-873690 stop | minikube | jenkins | v1.26.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
| start | -p stopped-upgrade-873690 | stopped-upgrade-873690 | jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 11:00 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p stopped-upgrade-873690 | stopped-upgrade-873690 | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | 17 Mar 25 11:00 UTC |
| start | -p auto-236437 --memory=3072 | auto-236437 | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p cert-expiration-196744 | cert-expiration-196744 | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | 17 Mar 25 11:00 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-196744 | cert-expiration-196744 | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | 17 Mar 25 11:00 UTC |
| start | -p kindnet-236437 | kindnet-236437 | jenkins | v1.35.0 | 17 Mar 25 11:00 UTC | |
| | --memory=3072 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --cni=kindnet --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p kubernetes-upgrade-038579 | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p kubernetes-upgrade-038579 | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC | 17 Mar 25 11:02 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p kubernetes-upgrade-038579 | kubernetes-upgrade-038579 | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC | 17 Mar 25 11:02 UTC |
| start | -p calico-236437 --memory=3072 | calico-236437 | jenkins | v1.35.0 | 17 Mar 25 11:02 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=15m | | | | | |
| | --cni=calico --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/03/17 11:02:24
Running on machine: ubuntu-20-agent-6
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0317 11:02:24.880858 271403 out.go:345] Setting OutFile to fd 1 ...
I0317 11:02:24.881135 271403 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 11:02:24.881147 271403 out.go:358] Setting ErrFile to fd 2...
I0317 11:02:24.881151 271403 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 11:02:24.881334 271403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-4918/.minikube/bin
I0317 11:02:24.882486 271403 out.go:352] Setting JSON to false
I0317 11:02:24.884073 271403 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2638,"bootTime":1742206707,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0317 11:02:24.884163 271403 start.go:139] virtualization: kvm guest
I0317 11:02:24.885681 271403 out.go:177] * [calico-236437] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0317 11:02:24.887539 271403 out.go:177] - MINIKUBE_LOCATION=20535
I0317 11:02:24.887565 271403 notify.go:220] Checking for updates...
I0317 11:02:24.889529 271403 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0317 11:02:24.890553 271403 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20535-4918/kubeconfig
I0317 11:02:24.891476 271403 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-4918/.minikube
I0317 11:02:24.892387 271403 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0317 11:02:24.893262 271403 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0317 11:02:24.894457 271403 config.go:182] Loaded profile config "auto-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 11:02:24.894580 271403 config.go:182] Loaded profile config "kindnet-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 11:02:24.894677 271403 config.go:182] Loaded profile config "pause-507725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 11:02:24.894762 271403 driver.go:394] Setting default libvirt URI to qemu:///system
I0317 11:02:24.918017 271403 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
I0317 11:02:24.918114 271403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0317 11:02:24.969860 271403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:02:24.960688592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0317 11:02:24.969970 271403 docker.go:318] overlay module found
I0317 11:02:24.971694 271403 out.go:177] * Using the docker driver based on user configuration
I0317 11:02:24.972796 271403 start.go:297] selected driver: docker
I0317 11:02:24.972809 271403 start.go:901] validating driver "docker" against <nil>
I0317 11:02:24.972827 271403 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0317 11:02:24.973657 271403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0317 11:02:25.022032 271403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 11:02:25.012636564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0317 11:02:25.022160 271403 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0317 11:02:25.022392 271403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0317 11:02:25.023911 271403 out.go:177] * Using Docker driver with root privileges
I0317 11:02:25.024881 271403 cni.go:84] Creating CNI manager for "calico"
I0317 11:02:25.024899 271403 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
I0317 11:02:25.024977 271403 start.go:340] cluster config:
{Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0317 11:02:25.026106 271403 out.go:177] * Starting "calico-236437" primary control-plane node in "calico-236437" cluster
I0317 11:02:25.027136 271403 cache.go:121] Beginning downloading kic base image for docker with containerd
I0317 11:02:25.028276 271403 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0317 11:02:25.029237 271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0317 11:02:25.029286 271403 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
I0317 11:02:25.029305 271403 cache.go:56] Caching tarball of preloaded images
I0317 11:02:25.029318 271403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0317 11:02:25.029388 271403 preload.go:172] Found /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0317 11:02:25.029403 271403 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0317 11:02:25.029535 271403 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json ...
I0317 11:02:25.029562 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json: {Name:mka28e5f5151a7bb8665b9fadb1eddd447540b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:25.050614 271403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0317 11:02:25.050633 271403 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0317 11:02:25.050647 271403 cache.go:230] Successfully downloaded all kic artifacts
I0317 11:02:25.050674 271403 start.go:360] acquireMachinesLock for calico-236437: {Name:mka22ede0df163978b69124089e295c5c09c2417 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 11:02:25.050757 271403 start.go:364] duration metric: took 70.02µs to acquireMachinesLock for "calico-236437"
I0317 11:02:25.050781 271403 start.go:93] Provisioning new machine with config: &{Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0317 11:02:25.050872 271403 start.go:125] createHost starting for "" (driver="docker")
I0317 11:02:23.037623 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:25.037658 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:23.149814 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:25.650135 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:24.534023 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:26.534079 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:29.034382 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:25.052899 271403 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
I0317 11:02:25.053169 271403 start.go:159] libmachine.API.Create for "calico-236437" (driver="docker")
I0317 11:02:25.053195 271403 client.go:168] LocalClient.Create starting
I0317 11:02:25.053249 271403 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem
I0317 11:02:25.053279 271403 main.go:141] libmachine: Decoding PEM data...
I0317 11:02:25.053293 271403 main.go:141] libmachine: Parsing certificate...
I0317 11:02:25.053336 271403 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem
I0317 11:02:25.053354 271403 main.go:141] libmachine: Decoding PEM data...
I0317 11:02:25.053364 271403 main.go:141] libmachine: Parsing certificate...
I0317 11:02:25.053671 271403 cli_runner.go:164] Run: docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0317 11:02:25.069801 271403 cli_runner.go:211] docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0317 11:02:25.069854 271403 network_create.go:284] running [docker network inspect calico-236437] to gather additional debugging logs...
I0317 11:02:25.069871 271403 cli_runner.go:164] Run: docker network inspect calico-236437
W0317 11:02:25.086515 271403 cli_runner.go:211] docker network inspect calico-236437 returned with exit code 1
I0317 11:02:25.086545 271403 network_create.go:287] error running [docker network inspect calico-236437]: docker network inspect calico-236437: exit status 1
stdout:
[]
stderr:
Error response from daemon: network calico-236437 not found
I0317 11:02:25.086566 271403 network_create.go:289] output of [docker network inspect calico-236437]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network calico-236437 not found
** /stderr **
I0317 11:02:25.086714 271403 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0317 11:02:25.103494 271403 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6a2ef9d4bc68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:9a:4d:91:26:57:2c} reservation:<nil>}
I0317 11:02:25.104219 271403 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-00bf62ef0133 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:c5:34:86:d6:21} reservation:<nil>}
I0317 11:02:25.104910 271403 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-81e0001ceae7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:6a:cf:1c:79:e6} reservation:<nil>}
I0317 11:02:25.105515 271403 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-16edb2a113e3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d6:59:06:a9:a8:e8} reservation:<nil>}
I0317 11:02:25.106325 271403 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d7f060}
I0317 11:02:25.106346 271403 network_create.go:124] attempt to create docker network calico-236437 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0317 11:02:25.106383 271403 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-236437 calico-236437
I0317 11:02:25.157870 271403 network_create.go:108] docker network calico-236437 192.168.85.0/24 created
I0317 11:02:25.157905 271403 kic.go:121] calculated static IP "192.168.85.2" for the "calico-236437" container
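The subnet scan above (skipping the taken 192.168.49/58/67/76 networks, settling on 192.168.85.0/24, then deriving gateway .1 and static node IP .2) can be sketched in Python. This is an illustrative re-implementation under the assumption that the scan steps the third octet by 9, as the log suggests; it is not minikube's actual `network.go` code:

```python
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", step=9, attempts=20):
    """Walk private /24 subnets (49, 58, 67, 76, 85, ... in the third
    octet) and return the first one not already in use, mirroring the
    "skipping subnet ... that is taken" lines in the log above."""
    net = ipaddress.ip_network(start)
    for _ in range(attempts):
        if net not in taken:
            return net
        # advance the third octet by `step` (assumed from the log's sequence)
        net = ipaddress.ip_network((int(net.network_address) + step * 256, 24))
    raise RuntimeError("no free private subnet found")

# Subnets the log reports as taken by existing docker bridge networks.
taken = {ipaddress.ip_network(s) for s in
         ("192.168.49.0/24", "192.168.58.0/24",
          "192.168.67.0/24", "192.168.76.0/24")}

free = pick_free_subnet(taken)
gateway = free.network_address + 1   # used for `docker network create --gateway`
node_ip = free.network_address + 2   # the "calculated static IP" for the container
```

With the four taken subnets from this log, the sketch lands on 192.168.85.0/24 with gateway 192.168.85.1 and node IP 192.168.85.2, matching the `docker network create` invocation that follows.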
I0317 11:02:25.157997 271403 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0317 11:02:25.175038 271403 cli_runner.go:164] Run: docker volume create calico-236437 --label name.minikube.sigs.k8s.io=calico-236437 --label created_by.minikube.sigs.k8s.io=true
I0317 11:02:25.193023 271403 oci.go:103] Successfully created a docker volume calico-236437
I0317 11:02:25.193103 271403 cli_runner.go:164] Run: docker run --rm --name calico-236437-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-236437 --entrypoint /usr/bin/test -v calico-236437:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
I0317 11:02:25.607335 271403 oci.go:107] Successfully prepared a docker volume calico-236437
I0317 11:02:25.607382 271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0317 11:02:25.607404 271403 kic.go:194] Starting extracting preloaded images to volume ...
I0317 11:02:25.607460 271403 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
I0317 11:02:27.537536 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:30.036900 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:28.149376 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:30.649199 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:30.089006 271403 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20535-4918/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-236437:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.481483792s)
I0317 11:02:30.089037 271403 kic.go:203] duration metric: took 4.481630761s to extract preloaded images to volume ...
W0317 11:02:30.089153 271403 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0317 11:02:30.089236 271403 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0317 11:02:30.143191 271403 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-236437 --name calico-236437 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-236437 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-236437 --network calico-236437 --ip 192.168.85.2 --volume calico-236437:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
I0317 11:02:30.402985 271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Running}}
I0317 11:02:30.421737 271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
I0317 11:02:30.443380 271403 cli_runner.go:164] Run: docker exec calico-236437 stat /var/lib/dpkg/alternatives/iptables
I0317 11:02:30.487803 271403 oci.go:144] the created container "calico-236437" has a running status.
I0317 11:02:30.487842 271403 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa...
I0317 11:02:30.966099 271403 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0317 11:02:30.989095 271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
I0317 11:02:31.006629 271403 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0317 11:02:31.006654 271403 kic_runner.go:114] Args: [docker exec --privileged calico-236437 chown docker:docker /home/docker/.ssh/authorized_keys]
I0317 11:02:31.052822 271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
I0317 11:02:31.073514 271403 machine.go:93] provisionDockerMachine start ...
I0317 11:02:31.073608 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:31.091435 271403 main.go:141] libmachine: Using SSH client type: native
I0317 11:02:31.091672 271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0317 11:02:31.091683 271403 main.go:141] libmachine: About to run SSH command:
hostname
I0317 11:02:31.230753 271403 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-236437
I0317 11:02:31.230782 271403 ubuntu.go:169] provisioning hostname "calico-236437"
I0317 11:02:31.230855 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:31.248577 271403 main.go:141] libmachine: Using SSH client type: native
I0317 11:02:31.248869 271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0317 11:02:31.248892 271403 main.go:141] libmachine: About to run SSH command:
sudo hostname calico-236437 && echo "calico-236437" | sudo tee /etc/hostname
I0317 11:02:31.389908 271403 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-236437
I0317 11:02:31.390001 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:31.407223 271403 main.go:141] libmachine: Using SSH client type: native
I0317 11:02:31.407517 271403 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0317 11:02:31.407545 271403 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\scalico-236437' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-236437/g' /etc/hosts;
else
echo '127.0.1.1 calico-236437' | sudo tee -a /etc/hosts;
fi
fi
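The shell snippet above makes /etc/hosts idempotently map 127.0.1.1 to the machine hostname. A minimal Python sketch of the same logic (hypothetical helper name, not part of minikube):

```python
def ensure_hostname_entry(hosts_text, hostname):
    """Mirror the shell logic above: if no line already ends with the
    hostname, rewrite an existing 127.0.1.1 entry in place, otherwise
    append a fresh one. Calling it twice changes nothing."""
    lines = hosts_text.splitlines()
    if any(line.split()[-1:] == [hostname] for line in lines):
        return hosts_text  # hostname already present; leave file untouched
    for i, line in enumerate(lines):
        if line.startswith("127.0.1.1"):
            lines[i] = f"127.0.1.1 {hostname}"
            break
    else:
        lines.append(f"127.0.1.1 {hostname}")
    return "\n".join(lines) + "\n"

hosts = ensure_hostname_entry("127.0.0.1 localhost\n", "calico-236437")
```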
I0317 11:02:31.543474 271403 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0317 11:02:31.543500 271403 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20535-4918/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-4918/.minikube}
I0317 11:02:31.543521 271403 ubuntu.go:177] setting up certificates
I0317 11:02:31.543534 271403 provision.go:84] configureAuth start
I0317 11:02:31.543589 271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
I0317 11:02:31.561231 271403 provision.go:143] copyHostCerts
I0317 11:02:31.561284 271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem, removing ...
I0317 11:02:31.561292 271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem
I0317 11:02:31.561354 271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/ca.pem (1082 bytes)
I0317 11:02:31.561446 271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem, removing ...
I0317 11:02:31.561454 271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem
I0317 11:02:31.561478 271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/cert.pem (1123 bytes)
I0317 11:02:31.561530 271403 exec_runner.go:144] found /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem, removing ...
I0317 11:02:31.561537 271403 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem
I0317 11:02:31.561562 271403 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-4918/.minikube/key.pem (1679 bytes)
I0317 11:02:31.561607 271403 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem org=jenkins.calico-236437 san=[127.0.0.1 192.168.85.2 calico-236437 localhost minikube]
I0317 11:02:31.992225 271403 provision.go:177] copyRemoteCerts
I0317 11:02:31.992284 271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0317 11:02:31.992319 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:32.009677 271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
I0317 11:02:32.104042 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0317 11:02:32.126981 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0317 11:02:32.149635 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0317 11:02:32.172473 271403 provision.go:87] duration metric: took 628.925048ms to configureAuth
I0317 11:02:32.172509 271403 ubuntu.go:193] setting minikube options for container-runtime
I0317 11:02:32.172673 271403 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 11:02:32.172685 271403 machine.go:96] duration metric: took 1.099153553s to provisionDockerMachine
I0317 11:02:32.172692 271403 client.go:171] duration metric: took 7.119491835s to LocalClient.Create
I0317 11:02:32.172711 271403 start.go:167] duration metric: took 7.119541902s to libmachine.API.Create "calico-236437"
I0317 11:02:32.172723 271403 start.go:293] postStartSetup for "calico-236437" (driver="docker")
I0317 11:02:32.172734 271403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0317 11:02:32.172782 271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0317 11:02:32.172832 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:32.189861 271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
I0317 11:02:32.284036 271403 ssh_runner.go:195] Run: cat /etc/os-release
I0317 11:02:32.287202 271403 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0317 11:02:32.287240 271403 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0317 11:02:32.287285 271403 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0317 11:02:32.287295 271403 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0317 11:02:32.287311 271403 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/addons for local assets ...
I0317 11:02:32.287361 271403 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-4918/.minikube/files for local assets ...
I0317 11:02:32.287433 271403 filesync.go:149] local asset: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem -> 116902.pem in /etc/ssl/certs
I0317 11:02:32.287518 271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0317 11:02:32.295619 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /etc/ssl/certs/116902.pem (1708 bytes)
I0317 11:02:32.317674 271403 start.go:296] duration metric: took 144.936846ms for postStartSetup
I0317 11:02:32.318040 271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
I0317 11:02:32.335236 271403 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/config.json ...
I0317 11:02:32.335512 271403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0317 11:02:32.335547 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:32.351723 271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
I0317 11:02:32.444147 271403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0317 11:02:32.448601 271403 start.go:128] duration metric: took 7.397705312s to createHost
I0317 11:02:32.448627 271403 start.go:83] releasing machines lock for "calico-236437", held for 7.39785815s
I0317 11:02:32.448708 271403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-236437
I0317 11:02:32.467676 271403 ssh_runner.go:195] Run: cat /version.json
I0317 11:02:32.467727 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:32.467758 271403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0317 11:02:32.467811 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:32.485718 271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
I0317 11:02:32.485824 271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
I0317 11:02:32.657328 271403 ssh_runner.go:195] Run: systemctl --version
I0317 11:02:32.661411 271403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0317 11:02:32.665794 271403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0317 11:02:32.689140 271403 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0317 11:02:32.689229 271403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0317 11:02:32.714533 271403 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0317 11:02:32.714561 271403 start.go:495] detecting cgroup driver to use...
I0317 11:02:32.714602 271403 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0317 11:02:32.714651 271403 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0317 11:02:32.726430 271403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0317 11:02:32.736704 271403 docker.go:217] disabling cri-docker service (if available) ...
I0317 11:02:32.736750 271403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0317 11:02:32.749237 271403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0317 11:02:32.762021 271403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0317 11:02:32.837408 271403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0317 11:02:32.915411 271403 docker.go:233] disabling docker service ...
I0317 11:02:32.915475 271403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0317 11:02:32.934753 271403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0317 11:02:32.945339 271403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0317 11:02:33.026602 271403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0317 11:02:33.105023 271403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0317 11:02:33.115410 271403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0317 11:02:33.130129 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0317 11:02:33.139140 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0317 11:02:33.148241 271403 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0317 11:02:33.148304 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0317 11:02:33.156976 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0317 11:02:33.165716 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0317 11:02:33.174440 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0317 11:02:33.183153 271403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0317 11:02:33.191608 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0317 11:02:33.200222 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0317 11:02:33.208828 271403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
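The run of `sed -i` commands above rewrites /etc/containerd/config.toml line by line. A Python sketch of two of those rewrites (sandbox image pin and forcing `SystemdCgroup = false` for the cgroupfs driver), using the same regex style as the log; this is an illustration, not minikube's implementation:

```python
import re

def patch_containerd_config(toml_text):
    """Apply two of the line rewrites the log performs with sed:
    pin the pause/sandbox image and disable the systemd cgroup driver."""
    text = re.sub(r'(?m)^(\s*)sandbox_image = .*$',
                  r'\1sandbox_image = "registry.k8s.io/pause:3.10"', toml_text)
    text = re.sub(r'(?m)^(\s*)SystemdCgroup = .*$',
                  r'\1SystemdCgroup = false', text)
    return text

sample = '  sandbox_image = "registry.k8s.io/pause:3.9"\n  SystemdCgroup = true\n'
patched = patch_containerd_config(sample)
```

As in the log, the rewrites preserve leading indentation via the captured group, so nested TOML tables keep their layout.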
I0317 11:02:33.217773 271403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0317 11:02:33.225411 271403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0317 11:02:33.233211 271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 11:02:33.313024 271403 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0317 11:02:33.412133 271403 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0317 11:02:33.412208 271403 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0317 11:02:33.415675 271403 start.go:563] Will wait 60s for crictl version
I0317 11:02:33.415723 271403 ssh_runner.go:195] Run: which crictl
I0317 11:02:33.418802 271403 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0317 11:02:33.454942 271403 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0317 11:02:33.455012 271403 ssh_runner.go:195] Run: containerd --version
I0317 11:02:33.477807 271403 ssh_runner.go:195] Run: containerd --version
I0317 11:02:33.501834 271403 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
I0317 11:02:31.533659 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:33.534559 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:33.502865 271403 cli_runner.go:164] Run: docker network inspect calico-236437 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0317 11:02:33.521053 271403 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0317 11:02:33.524629 271403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0317 11:02:33.535881 271403 kubeadm.go:883] updating cluster {Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0317 11:02:33.536009 271403 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0317 11:02:33.536072 271403 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 11:02:33.567514 271403 containerd.go:627] all images are preloaded for containerd runtime.
I0317 11:02:33.567533 271403 containerd.go:534] Images already preloaded, skipping extraction
I0317 11:02:33.567587 271403 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 11:02:33.598171 271403 containerd.go:627] all images are preloaded for containerd runtime.
I0317 11:02:33.598192 271403 cache_images.go:84] Images are preloaded, skipping loading
I0317 11:02:33.598199 271403 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0317 11:02:33.598293 271403 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-236437 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
I0317 11:02:33.598353 271403 ssh_runner.go:195] Run: sudo crictl info
I0317 11:02:33.630316 271403 cni.go:84] Creating CNI manager for "calico"
I0317 11:02:33.630339 271403 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0317 11:02:33.630359 271403 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-236437 NodeName:calico-236437 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0317 11:02:33.630477 271403 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "calico-236437"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
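The kubeadm config dumped above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A minimal stdlib-only sketch of splitting such a stream and listing each document's `kind` (this is a line scanner for illustration, not a YAML parser, and is not minikube's actual Go code):

```python
# Split a kubeadm-style multi-document YAML stream and collect each document's kind.
# Illustration only: scans for lines starting with "kind:", does not parse YAML.

def document_kinds(stream: str) -> list[str]:
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds

sample = """apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

print(document_kinds(sample))
```

With the four kinds present, as in the generated kubeadm.yaml, the scanner returns them in document order.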
I0317 11:02:33.630528 271403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0317 11:02:33.638862 271403 binaries.go:44] Found k8s binaries, skipping transfer
I0317 11:02:33.638928 271403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0317 11:02:33.647870 271403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
I0317 11:02:33.664419 271403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0317 11:02:33.680721 271403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
I0317 11:02:33.697486 271403 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0317 11:02:33.700806 271403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
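The bash one-liner above makes the /etc/hosts entry idempotent: it drops any existing `control-plane.minikube.internal` line and appends the current IP mapping, so repeated starts never accumulate duplicates. The same pattern, sketched in Python (the `update_hosts` helper is hypothetical, and unlike the tab-anchored `grep -v` it also matches space-separated entries):

```python
# Idempotent hosts-entry update: remove any existing line ending in the hostname,
# then append the current "IP<TAB>hostname" mapping.

def update_hosts(hosts_text: str, ip: str, hostname: str) -> str:
    kept = [line for line in hosts_text.splitlines()
            if not line.rstrip().endswith("\t" + hostname)
            and not line.rstrip().endswith(" " + hostname)]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.85.1\tcontrol-plane.minikube.internal\n"
after = update_hosts(before, "192.168.85.2", "control-plane.minikube.internal")
print(after)
```

Running it against a file that already maps the hostname to a stale IP leaves exactly one entry, pointing at the new address.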
I0317 11:02:33.710885 271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 11:02:33.789041 271403 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0317 11:02:33.801846 271403 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437 for IP: 192.168.85.2
I0317 11:02:33.801877 271403 certs.go:194] generating shared ca certs ...
I0317 11:02:33.801896 271403 certs.go:226] acquiring lock for ca certs: {Name:mkf58624c63680e02907d28348d45986283847c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:33.802058 271403 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key
I0317 11:02:33.802123 271403 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key
I0317 11:02:33.802137 271403 certs.go:256] generating profile certs ...
I0317 11:02:33.802202 271403 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key
I0317 11:02:33.802228 271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt with IP's: []
I0317 11:02:33.992607 271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt ...
I0317 11:02:33.992636 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.crt: {Name:mkb52ca2b7d5614e9a99d0baa0ecbebaddb0cc98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:33.992801 271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key ...
I0317 11:02:33.992819 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/client.key: {Name:mk35db6f772b5eb0d0f9eef0f32d9e01b2c6129c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:33.992895 271403 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4
I0317 11:02:33.992909 271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0317 11:02:34.206081 271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 ...
I0317 11:02:34.206116 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4: {Name:mk106a12a3266907a0c64fdec49d2d65cff8ef4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:34.206307 271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4 ...
I0317 11:02:34.206328 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4: {Name:mkb761c01ac7dd169e99815f4912e839650faba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:34.206446 271403 certs.go:381] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt.916c13d4 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt
I0317 11:02:34.206543 271403 certs.go:385] copying /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key.916c13d4 -> /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key
I0317 11:02:34.206635 271403 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key
I0317 11:02:34.206657 271403 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt with IP's: []
I0317 11:02:34.324068 271403 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt ...
I0317 11:02:34.324097 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt: {Name:mk823c22b3bc8a80bc3c82b282af79b6abc16d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:34.324254 271403 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key ...
I0317 11:02:34.324267 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key: {Name:mk875be3f1f3630e7e6086d3ef46f0bec9649fb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:34.324420 271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem (1338 bytes)
W0317 11:02:34.324451 271403 certs.go:480] ignoring /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690_empty.pem, impossibly tiny 0 bytes
I0317 11:02:34.324461 271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca-key.pem (1675 bytes)
I0317 11:02:34.324494 271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/ca.pem (1082 bytes)
I0317 11:02:34.324524 271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/cert.pem (1123 bytes)
I0317 11:02:34.324558 271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/certs/key.pem (1679 bytes)
I0317 11:02:34.324619 271403 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem (1708 bytes)
I0317 11:02:34.325244 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0317 11:02:34.348013 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0317 11:02:34.369328 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0317 11:02:34.391242 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0317 11:02:34.413233 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0317 11:02:34.434100 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0317 11:02:34.458186 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0317 11:02:34.481676 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/profiles/calico-236437/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0317 11:02:34.505221 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0317 11:02:34.527325 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/certs/11690.pem --> /usr/share/ca-certificates/11690.pem (1338 bytes)
I0317 11:02:34.551519 271403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-4918/.minikube/files/etc/ssl/certs/116902.pem --> /usr/share/ca-certificates/116902.pem (1708 bytes)
I0317 11:02:34.572901 271403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0317 11:02:34.588811 271403 ssh_runner.go:195] Run: openssl version
I0317 11:02:34.593841 271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11690.pem && ln -fs /usr/share/ca-certificates/11690.pem /etc/ssl/certs/11690.pem"
I0317 11:02:34.602126 271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11690.pem
I0317 11:02:34.605246 271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:32 /usr/share/ca-certificates/11690.pem
I0317 11:02:34.605299 271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11690.pem
I0317 11:02:34.611760 271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11690.pem /etc/ssl/certs/51391683.0"
I0317 11:02:34.619902 271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/116902.pem && ln -fs /usr/share/ca-certificates/116902.pem /etc/ssl/certs/116902.pem"
I0317 11:02:34.627931 271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/116902.pem
I0317 11:02:34.631011 271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:32 /usr/share/ca-certificates/116902.pem
I0317 11:02:34.631053 271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/116902.pem
I0317 11:02:34.637206 271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/116902.pem /etc/ssl/certs/3ec20f2e.0"
I0317 11:02:34.646079 271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0317 11:02:34.654752 271403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0317 11:02:34.657906 271403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
I0317 11:02:34.657954 271403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0317 11:02:34.664388 271403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0317 11:02:34.673111 271403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0317 11:02:34.676159 271403 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0317 11:02:34.676200 271403 kubeadm.go:392] StartCluster: {Name:calico-236437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:calico-236437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0317 11:02:34.676252 271403 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0317 11:02:34.676286 271403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0317 11:02:34.710371 271403 cri.go:89] found id: ""
I0317 11:02:34.710443 271403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0317 11:02:34.720254 271403 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0317 11:02:34.728439 271403 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0317 11:02:34.728511 271403 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0317 11:02:34.736684 271403 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0317 11:02:34.736699 271403 kubeadm.go:157] found existing configuration files:
I0317 11:02:34.736730 271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0317 11:02:34.744549 271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0317 11:02:34.744604 271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0317 11:02:34.752129 271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0317 11:02:34.760012 271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0317 11:02:34.760069 271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0317 11:02:34.767476 271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0317 11:02:34.775057 271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0317 11:02:34.775105 271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0317 11:02:34.782810 271403 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0317 11:02:34.790578 271403 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0317 11:02:34.790624 271403 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
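The four grep/rm pairs above are one repeated pattern: for each expected kubeconfig (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf), keep the file only if it already references the control-plane endpoint, otherwise remove it before `kubeadm init`. A stdlib-only sketch of that loop (the `clean_stale_configs` helper is hypothetical, not minikube's actual implementation):

```python
import os
import tempfile

# Stale-kubeconfig cleanup: a config file is kept only if it already points at
# the control-plane endpoint; a present-but-stale file is removed before re-init.

ENDPOINT = "https://control-plane.minikube.internal:8443"
CONFIGS = ["admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"]

def clean_stale_configs(confdir):
    removed = []
    for name in CONFIGS:
        path = os.path.join(confdir, name)
        try:
            with open(path) as f:
                if ENDPOINT in f.read():
                    continue  # already targets the right endpoint; keep it
        except FileNotFoundError:
            continue  # nothing on disk, nothing to clean
        os.remove(path)  # present but stale: remove before kubeadm init
        removed.append(name)
    return removed

with tempfile.TemporaryDirectory() as d:
    # admin.conf targets a different endpoint (stale); kubelet.conf is current.
    with open(os.path.join(d, "admin.conf"), "w") as f:
        f.write("server: https://10.0.0.1:8443\n")
    with open(os.path.join(d, "kubelet.conf"), "w") as f:
        f.write(f"server: {ENDPOINT}\n")
    removed = clean_stale_configs(d)
print(removed)
```

In the log above all four files are missing (a first start), so every grep exits 2 and the subsequent `rm -f` calls are no-ops.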
I0317 11:02:34.797888 271403 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0317 11:02:34.833333 271403 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0317 11:02:34.833405 271403 kubeadm.go:310] [preflight] Running pre-flight checks
I0317 11:02:34.849583 271403 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0317 11:02:34.849687 271403 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
I0317 11:02:34.849745 271403 kubeadm.go:310] OS: Linux
I0317 11:02:34.849817 271403 kubeadm.go:310] CGROUPS_CPU: enabled
I0317 11:02:34.849899 271403 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0317 11:02:34.849997 271403 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0317 11:02:34.850078 271403 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0317 11:02:34.850154 271403 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0317 11:02:34.850217 271403 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0317 11:02:34.850265 271403 kubeadm.go:310] CGROUPS_PIDS: enabled
I0317 11:02:34.850312 271403 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0317 11:02:34.850353 271403 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0317 11:02:34.904813 271403 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0317 11:02:34.904974 271403 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0317 11:02:34.905103 271403 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0317 11:02:34.909905 271403 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0317 11:02:32.037038 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:34.537345 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:33.148942 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:35.648977 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:34.911531 271403 out.go:235] - Generating certificates and keys ...
I0317 11:02:34.911635 271403 kubeadm.go:310] [certs] Using existing ca certificate authority
I0317 11:02:34.911736 271403 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0317 11:02:35.268722 271403 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0317 11:02:35.468484 271403 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0317 11:02:35.769348 271403 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0317 11:02:35.993040 271403 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0317 11:02:36.202807 271403 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0317 11:02:36.203004 271403 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-236437 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0317 11:02:36.280951 271403 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0317 11:02:36.281084 271403 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-236437 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0317 11:02:36.463620 271403 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0317 11:02:36.510242 271403 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0317 11:02:36.900000 271403 kubeadm.go:310] [certs] Generating "sa" key and public key
I0317 11:02:36.900111 271403 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0317 11:02:37.075436 271403 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0317 11:02:37.263196 271403 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0317 11:02:37.642492 271403 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0317 11:02:37.737086 271403 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0317 11:02:38.040875 271403 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0317 11:02:38.041549 271403 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0317 11:02:38.043872 271403 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0317 11:02:36.034091 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:38.533914 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:38.045834 271403 out.go:235] - Booting up control plane ...
I0317 11:02:38.045950 271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0317 11:02:38.046019 271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0317 11:02:38.046719 271403 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0317 11:02:38.056299 271403 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0317 11:02:38.061457 271403 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0317 11:02:38.061534 271403 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0317 11:02:38.143998 271403 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0317 11:02:38.144138 271403 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0317 11:02:38.645417 271403 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.431671ms
I0317 11:02:38.645515 271403 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0317 11:02:37.037283 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:39.537378 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:41.537760 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:37.649404 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:40.148990 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:43.147383 271403 kubeadm.go:310] [api-check] The API server is healthy after 4.501934621s
I0317 11:02:43.158723 271403 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0317 11:02:43.168464 271403 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0317 11:02:43.184339 271403 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0317 11:02:43.184609 271403 kubeadm.go:310] [mark-control-plane] Marking the node calico-236437 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0317 11:02:43.191081 271403 kubeadm.go:310] [bootstrap-token] Using token: mixhu0.4ggx0rlksl4xdr10
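The bootstrap token above (`mixhu0.4ggx0rlksl4xdr10`) follows the documented kubeadm token format: a 6-character token ID and a 16-character token secret, both lowercase alphanumeric, joined by a dot. A small sketch validating that shape (the `is_bootstrap_token` helper is hypothetical):

```python
import re

# kubeadm bootstrap tokens have the form "<token-id>.<token-secret>":
# [a-z0-9]{6}.[a-z0-9]{16} per the Kubernetes bootstrap-token format.
TOKEN_RE = re.compile(r"[a-z0-9]{6}\.[a-z0-9]{16}")

def is_bootstrap_token(s: str) -> bool:
    return TOKEN_RE.fullmatch(s) is not None

print(is_bootstrap_token("mixhu0.4ggx0rlksl4xdr10"))
```

Only the secret half is sensitive; the token ID is also used as the name of the backing Secret (`bootstrap-token-<id>`) in kube-system.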
I0317 11:02:40.534081 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:42.534658 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:43.192582 271403 out.go:235] - Configuring RBAC rules ...
I0317 11:02:43.192739 271403 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0317 11:02:43.196215 271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0317 11:02:43.200588 271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0317 11:02:43.202942 271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0317 11:02:43.205272 271403 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0317 11:02:43.207452 271403 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0317 11:02:43.553368 271403 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0317 11:02:43.969959 271403 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0317 11:02:44.553346 271403 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0317 11:02:44.554242 271403 kubeadm.go:310]
I0317 11:02:44.554342 271403 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0317 11:02:44.554359 271403 kubeadm.go:310]
I0317 11:02:44.554471 271403 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0317 11:02:44.554492 271403 kubeadm.go:310]
I0317 11:02:44.554522 271403 kubeadm.go:310] mkdir -p $HOME/.kube
I0317 11:02:44.554611 271403 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0317 11:02:44.554704 271403 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0317 11:02:44.554722 271403 kubeadm.go:310]
I0317 11:02:44.554806 271403 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0317 11:02:44.554816 271403 kubeadm.go:310]
I0317 11:02:44.554894 271403 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0317 11:02:44.554903 271403 kubeadm.go:310]
I0317 11:02:44.554993 271403 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0317 11:02:44.555106 271403 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0317 11:02:44.555207 271403 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0317 11:02:44.555217 271403 kubeadm.go:310]
I0317 11:02:44.555395 271403 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0317 11:02:44.555506 271403 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0317 11:02:44.555523 271403 kubeadm.go:310]
I0317 11:02:44.555637 271403 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mixhu0.4ggx0rlksl4xdr10 \
I0317 11:02:44.555775 271403 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578 \
I0317 11:02:44.555807 271403 kubeadm.go:310] --control-plane
I0317 11:02:44.555816 271403 kubeadm.go:310]
I0317 11:02:44.555924 271403 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0317 11:02:44.555932 271403 kubeadm.go:310]
I0317 11:02:44.556026 271403 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mixhu0.4ggx0rlksl4xdr10 \
I0317 11:02:44.556149 271403 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:fbbd8e832ea7aa08371d4fcc88b71c8e29c98bed7a9a4feed9bf5043f7b52578
I0317 11:02:44.558534 271403 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0317 11:02:44.558760 271403 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
I0317 11:02:44.558854 271403 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0317 11:02:44.558879 271403 cni.go:84] Creating CNI manager for "calico"
I0317 11:02:44.561122 271403 out.go:177] * Configuring Calico (Container Networking Interface) ...
I0317 11:02:44.562673 271403 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
I0317 11:02:44.562695 271403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (324369 bytes)
I0317 11:02:44.581949 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0317 11:02:44.036780 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:46.036815 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:42.649172 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:44.649482 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:47.148798 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:45.843315 271403 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.261329329s)
I0317 11:02:45.843361 271403 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0317 11:02:45.843456 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:45.843478 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-236437 minikube.k8s.io/updated_at=2025_03_17T11_02_45_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=calico-236437 minikube.k8s.io/primary=true
I0317 11:02:45.850707 271403 ops.go:34] apiserver oom_adj: -16
I0317 11:02:45.948147 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:46.448502 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:46.949084 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:47.449157 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:47.948285 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:48.448265 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:48.949124 271403 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 11:02:49.015125 271403 kubeadm.go:1113] duration metric: took 3.171736497s to wait for elevateKubeSystemPrivileges
I0317 11:02:49.015169 271403 kubeadm.go:394] duration metric: took 14.338970216s to StartCluster
I0317 11:02:49.015191 271403 settings.go:142] acquiring lock: {Name:mk2a57d556efff40ccd4336229d7a78216b861f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:49.015295 271403 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20535-4918/kubeconfig
I0317 11:02:49.016764 271403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-4918/kubeconfig: {Name:mk686b9f6159ab958672b945ae0aa5a9c96e9ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 11:02:49.017020 271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0317 11:02:49.017025 271403 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0317 11:02:49.017094 271403 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0317 11:02:49.017190 271403 addons.go:69] Setting storage-provisioner=true in profile "calico-236437"
I0317 11:02:49.017214 271403 addons.go:238] Setting addon storage-provisioner=true in "calico-236437"
I0317 11:02:49.017235 271403 addons.go:69] Setting default-storageclass=true in profile "calico-236437"
I0317 11:02:49.017249 271403 config.go:182] Loaded profile config "calico-236437": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 11:02:49.017263 271403 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-236437"
I0317 11:02:49.017336 271403 host.go:66] Checking if "calico-236437" exists ...
I0317 11:02:49.017645 271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
I0317 11:02:49.017831 271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
I0317 11:02:49.018669 271403 out.go:177] * Verifying Kubernetes components...
I0317 11:02:49.019970 271403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 11:02:49.043863 271403 addons.go:238] Setting addon default-storageclass=true in "calico-236437"
I0317 11:02:49.043916 271403 host.go:66] Checking if "calico-236437" exists ...
I0317 11:02:49.044307 271403 cli_runner.go:164] Run: docker container inspect calico-236437 --format={{.State.Status}}
I0317 11:02:49.044516 271403 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0317 11:02:45.035232 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:47.533353 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:49.045642 271403 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0317 11:02:49.045662 271403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0317 11:02:49.045707 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:49.074641 271403 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0317 11:02:49.074679 271403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0317 11:02:49.074683 271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
I0317 11:02:49.074750 271403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-236437
I0317 11:02:49.092825 271403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20535-4918/.minikube/machines/calico-236437/id_rsa Username:docker}
I0317 11:02:49.146609 271403 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0317 11:02:49.146645 271403 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0317 11:02:49.231557 271403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0317 11:02:49.512613 271403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0317 11:02:49.840101 271403 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
I0317 11:02:49.841256 271403 node_ready.go:35] waiting up to 15m0s for node "calico-236437" to be "Ready" ...
I0317 11:02:49.904604 271403 node_ready.go:49] node "calico-236437" has status "Ready":"True"
I0317 11:02:49.904627 271403 node_ready.go:38] duration metric: took 63.34338ms for node "calico-236437" to be "Ready" ...
I0317 11:02:49.904637 271403 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 11:02:49.907969 271403 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace to be "Ready" ...
I0317 11:02:50.110000 271403 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0317 11:02:48.037463 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:50.037520 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:49.149631 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:51.648685 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:49.534616 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:52.034129 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:50.111234 271403 addons.go:514] duration metric: took 1.094138366s for enable addons: enabled=[storage-provisioner default-storageclass]
I0317 11:02:50.344618 271403 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-236437" context rescaled to 1 replicas
I0317 11:02:51.912894 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:53.913540 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:52.537453 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:55.036479 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:54.148382 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:56.648833 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:54.533694 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:56.533724 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:58.534453 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:56.413348 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:58.912802 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:57.037090 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:59.538524 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:02:59.147863 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:01.148827 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:01.033848 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:03.033885 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:00.913484 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:03.413309 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:02.037409 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:04.537527 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:03.648469 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:06.148443 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:05.533183 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:07.534316 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:05.912288 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:07.913513 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:07.037320 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:09.037403 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:11.537099 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:08.148993 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:10.149164 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:10.034575 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:12.534405 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:10.413225 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:12.912794 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:13.537150 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:16.036722 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:12.648921 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:15.148704 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:15.033258 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:17.034293 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:14.913329 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:17.412933 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:18.037375 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:20.536741 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:17.649237 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:20.148773 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:19.533985 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:22.033205 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:24.033479 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:19.912177 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:21.913651 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:24.413065 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:22.537000 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:25.036714 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:22.648948 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:25.148737 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:26.534711 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:29.032989 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:26.413616 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:28.913818 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:27.037167 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:29.537027 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:31.537071 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:27.648894 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:30.148407 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:32.149154 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:31.034371 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:33.533651 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:31.412984 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:33.413031 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:33.537243 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:36.036866 245681 pod_ready.go:103] pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:34.648908 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:37.149211 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:35.534459 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:38.034643 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:35.420991 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:37.913715 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:37.537340 245681 pod_ready.go:82] duration metric: took 4m0.005543433s for pod "coredns-668d6bf9bc-c7scj" in "kube-system" namespace to be "Ready" ...
E0317 11:03:37.537353 245681 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0317 11:03:37.537374 245681 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.540817 245681 pod_ready.go:93] pod "etcd-pause-507725" in "kube-system" namespace has status "Ready":"True"
I0317 11:03:37.540828 245681 pod_ready.go:82] duration metric: took 3.446936ms for pod "etcd-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.540841 245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.544051 245681 pod_ready.go:93] pod "kube-apiserver-pause-507725" in "kube-system" namespace has status "Ready":"True"
I0317 11:03:37.544059 245681 pod_ready.go:82] duration metric: took 3.212331ms for pod "kube-apiserver-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.544066 245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.547376 245681 pod_ready.go:93] pod "kube-controller-manager-pause-507725" in "kube-system" namespace has status "Ready":"True"
I0317 11:03:37.547385 245681 pod_ready.go:82] duration metric: took 3.313908ms for pod "kube-controller-manager-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.547394 245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmh8d" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.550390 245681 pod_ready.go:93] pod "kube-proxy-lmh8d" in "kube-system" namespace has status "Ready":"True"
I0317 11:03:37.550397 245681 pod_ready.go:82] duration metric: took 2.998178ms for pod "kube-proxy-lmh8d" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.550402 245681 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.935598 245681 pod_ready.go:93] pod "kube-scheduler-pause-507725" in "kube-system" namespace has status "Ready":"True"
I0317 11:03:37.935609 245681 pod_ready.go:82] duration metric: took 385.202448ms for pod "kube-scheduler-pause-507725" in "kube-system" namespace to be "Ready" ...
I0317 11:03:37.935615 245681 pod_ready.go:39] duration metric: took 4m2.410016367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 11:03:37.935635 245681 api_server.go:52] waiting for apiserver process to appear ...
I0317 11:03:37.935665 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:03:37.935716 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:03:37.970327 245681 cri.go:89] found id: "d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
I0317 11:03:37.970343 245681 cri.go:89] found id: ""
I0317 11:03:37.970351 245681 logs.go:282] 1 containers: [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01]
I0317 11:03:37.970412 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:37.974106 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:03:37.974150 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:03:38.007043 245681 cri.go:89] found id: "5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
I0317 11:03:38.007057 245681 cri.go:89] found id: ""
I0317 11:03:38.007071 245681 logs.go:282] 1 containers: [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2]
I0317 11:03:38.007112 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:38.010476 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:03:38.010521 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:03:38.043432 245681 cri.go:89] found id: ""
I0317 11:03:38.043447 245681 logs.go:282] 0 containers: []
W0317 11:03:38.043455 245681 logs.go:284] No container was found matching "coredns"
I0317 11:03:38.043460 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:03:38.043513 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:03:38.076008 245681 cri.go:89] found id: "d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
I0317 11:03:38.076021 245681 cri.go:89] found id: ""
I0317 11:03:38.076027 245681 logs.go:282] 1 containers: [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373]
I0317 11:03:38.076071 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:38.079322 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:03:38.079381 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:03:38.111550 245681 cri.go:89] found id: "491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
I0317 11:03:38.111565 245681 cri.go:89] found id: ""
I0317 11:03:38.111573 245681 logs.go:282] 1 containers: [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c]
I0317 11:03:38.111619 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:38.114859 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:03:38.114913 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:03:38.147851 245681 cri.go:89] found id: "80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
I0317 11:03:38.147866 245681 cri.go:89] found id: ""
I0317 11:03:38.147874 245681 logs.go:282] 1 containers: [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718]
I0317 11:03:38.147928 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:38.151478 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:03:38.151520 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:03:38.182883 245681 cri.go:89] found id: ""
I0317 11:03:38.182896 245681 logs.go:282] 0 containers: []
W0317 11:03:38.182902 245681 logs.go:284] No container was found matching "kindnet"
I0317 11:03:38.182913 245681 logs.go:123] Gathering logs for kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] ...
I0317 11:03:38.182923 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
I0317 11:03:38.224826 245681 logs.go:123] Gathering logs for kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] ...
I0317 11:03:38.224845 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
I0317 11:03:38.268744 245681 logs.go:123] Gathering logs for kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] ...
I0317 11:03:38.268764 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
I0317 11:03:38.316908 245681 logs.go:123] Gathering logs for containerd ...
I0317 11:03:38.316926 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:03:38.362274 245681 logs.go:123] Gathering logs for kubelet ...
I0317 11:03:38.362294 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:03:38.458593 245681 logs.go:123] Gathering logs for etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] ...
I0317 11:03:38.458613 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
I0317 11:03:38.498455 245681 logs.go:123] Gathering logs for kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] ...
I0317 11:03:38.498475 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
I0317 11:03:38.533550 245681 logs.go:123] Gathering logs for container status ...
I0317 11:03:38.533573 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:03:38.569617 245681 logs.go:123] Gathering logs for dmesg ...
I0317 11:03:38.569637 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:03:38.587868 245681 logs.go:123] Gathering logs for describe nodes ...
I0317 11:03:38.587884 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:03:41.171361 245681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0317 11:03:41.182605 245681 api_server.go:72] duration metric: took 4m6.106338016s to wait for apiserver process to appear ...
I0317 11:03:41.182618 245681 api_server.go:88] waiting for apiserver healthz status ...
I0317 11:03:41.182644 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:03:41.182681 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:03:41.215179 245681 cri.go:89] found id: "d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
I0317 11:03:41.215193 245681 cri.go:89] found id: ""
I0317 11:03:41.215200 245681 logs.go:282] 1 containers: [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01]
I0317 11:03:41.215339 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:41.218687 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:03:41.218741 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:03:41.250752 245681 cri.go:89] found id: "5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
I0317 11:03:41.250768 245681 cri.go:89] found id: ""
I0317 11:03:41.250775 245681 logs.go:282] 1 containers: [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2]
I0317 11:03:41.250826 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:41.254355 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:03:41.254410 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:03:41.287190 245681 cri.go:89] found id: ""
I0317 11:03:41.287208 245681 logs.go:282] 0 containers: []
W0317 11:03:41.287218 245681 logs.go:284] No container was found matching "coredns"
I0317 11:03:41.287225 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:03:41.287329 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:03:41.320268 245681 cri.go:89] found id: "d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
I0317 11:03:41.320283 245681 cri.go:89] found id: ""
I0317 11:03:41.320293 245681 logs.go:282] 1 containers: [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373]
I0317 11:03:41.320351 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:41.323878 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:03:41.323935 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:03:41.355657 245681 cri.go:89] found id: "491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
I0317 11:03:41.355668 245681 cri.go:89] found id: ""
I0317 11:03:41.355674 245681 logs.go:282] 1 containers: [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c]
I0317 11:03:41.355714 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:41.358944 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:03:41.359001 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:03:41.391119 245681 cri.go:89] found id: "80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
I0317 11:03:41.391133 245681 cri.go:89] found id: ""
I0317 11:03:41.391141 245681 logs.go:282] 1 containers: [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718]
I0317 11:03:41.391188 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:41.394575 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:03:41.394626 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:03:41.428649 245681 cri.go:89] found id: ""
I0317 11:03:41.428661 245681 logs.go:282] 0 containers: []
W0317 11:03:41.428667 245681 logs.go:284] No container was found matching "kindnet"
I0317 11:03:41.428677 245681 logs.go:123] Gathering logs for describe nodes ...
I0317 11:03:41.428688 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:03:41.512438 245681 logs.go:123] Gathering logs for etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] ...
I0317 11:03:41.512458 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
I0317 11:03:41.552239 245681 logs.go:123] Gathering logs for kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] ...
I0317 11:03:41.552256 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
I0317 11:03:41.586185 245681 logs.go:123] Gathering logs for containerd ...
I0317 11:03:41.586200 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:03:41.628142 245681 logs.go:123] Gathering logs for dmesg ...
I0317 11:03:41.628159 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:03:41.646089 245681 logs.go:123] Gathering logs for kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] ...
I0317 11:03:41.646106 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
I0317 11:03:41.685793 245681 logs.go:123] Gathering logs for kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] ...
I0317 11:03:41.685809 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
I0317 11:03:41.728438 245681 logs.go:123] Gathering logs for kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] ...
I0317 11:03:41.728455 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
I0317 11:03:41.773094 245681 logs.go:123] Gathering logs for container status ...
I0317 11:03:41.773111 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:03:39.149731 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:41.648838 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:40.533971 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:42.534469 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:40.412687 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:42.413498 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:41.807752 245681 logs.go:123] Gathering logs for kubelet ...
I0317 11:03:41.807768 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:03:44.399144 245681 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
I0317 11:03:44.402976 245681 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
ok
I0317 11:03:44.404151 245681 api_server.go:141] control plane version: v1.32.2
I0317 11:03:44.404169 245681 api_server.go:131] duration metric: took 3.221544822s to wait for apiserver health ...
I0317 11:03:44.404178 245681 system_pods.go:43] waiting for kube-system pods to appear ...
I0317 11:03:44.404201 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:03:44.404249 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:03:44.437034 245681 cri.go:89] found id: "d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
I0317 11:03:44.437050 245681 cri.go:89] found id: ""
I0317 11:03:44.437056 245681 logs.go:282] 1 containers: [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01]
I0317 11:03:44.437103 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:44.440667 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:03:44.440724 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:03:44.474416 245681 cri.go:89] found id: "5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
I0317 11:03:44.474427 245681 cri.go:89] found id: ""
I0317 11:03:44.474433 245681 logs.go:282] 1 containers: [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2]
I0317 11:03:44.474491 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:44.478025 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:03:44.478078 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:03:44.510846 245681 cri.go:89] found id: ""
I0317 11:03:44.510862 245681 logs.go:282] 0 containers: []
W0317 11:03:44.510868 245681 logs.go:284] No container was found matching "coredns"
I0317 11:03:44.510873 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:03:44.510916 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:03:44.545105 245681 cri.go:89] found id: "d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
I0317 11:03:44.545116 245681 cri.go:89] found id: ""
I0317 11:03:44.545121 245681 logs.go:282] 1 containers: [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373]
I0317 11:03:44.545164 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:44.548666 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:03:44.548712 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:03:44.580776 245681 cri.go:89] found id: "491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
I0317 11:03:44.580793 245681 cri.go:89] found id: ""
I0317 11:03:44.580844 245681 logs.go:282] 1 containers: [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c]
I0317 11:03:44.580891 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:44.584414 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:03:44.584460 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:03:44.616416 245681 cri.go:89] found id: "80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
I0317 11:03:44.616430 245681 cri.go:89] found id: ""
I0317 11:03:44.616438 245681 logs.go:282] 1 containers: [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718]
I0317 11:03:44.616488 245681 ssh_runner.go:195] Run: which crictl
I0317 11:03:44.619818 245681 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:03:44.619870 245681 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:03:44.653683 245681 cri.go:89] found id: ""
I0317 11:03:44.653695 245681 logs.go:282] 0 containers: []
W0317 11:03:44.653702 245681 logs.go:284] No container was found matching "kindnet"
I0317 11:03:44.653713 245681 logs.go:123] Gathering logs for kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] ...
I0317 11:03:44.653723 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c"
I0317 11:03:44.688280 245681 logs.go:123] Gathering logs for kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] ...
I0317 11:03:44.688295 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718"
I0317 11:03:44.737319 245681 logs.go:123] Gathering logs for dmesg ...
I0317 11:03:44.737337 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:03:44.756391 245681 logs.go:123] Gathering logs for kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] ...
I0317 11:03:44.756405 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01"
I0317 11:03:44.795981 245681 logs.go:123] Gathering logs for containerd ...
I0317 11:03:44.796001 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:03:44.838624 245681 logs.go:123] Gathering logs for container status ...
I0317 11:03:44.838641 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:03:44.875163 245681 logs.go:123] Gathering logs for kubelet ...
I0317 11:03:44.875189 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:03:44.964409 245681 logs.go:123] Gathering logs for describe nodes ...
I0317 11:03:44.964429 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:03:45.046002 245681 logs.go:123] Gathering logs for etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] ...
I0317 11:03:45.046017 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2"
I0317 11:03:45.084825 245681 logs.go:123] Gathering logs for kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] ...
I0317 11:03:45.084842 245681 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373"
I0317 11:03:44.148415 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:46.148535 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:45.033698 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:47.533224 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:44.912999 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:47.412832 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:49.413266 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:47.629139 245681 system_pods.go:59] 7 kube-system pods found
I0317 11:03:47.629170 245681 system_pods.go:61] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:47.629175 245681 system_pods.go:61] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:47.629184 245681 system_pods.go:61] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:47.629187 245681 system_pods.go:61] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:47.629190 245681 system_pods.go:61] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:47.629193 245681 system_pods.go:61] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:47.629195 245681 system_pods.go:61] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:47.629200 245681 system_pods.go:74] duration metric: took 3.225017966s to wait for pod list to return data ...
I0317 11:03:47.629206 245681 default_sa.go:34] waiting for default service account to be created ...
I0317 11:03:47.631444 245681 default_sa.go:45] found service account: "default"
I0317 11:03:47.631456 245681 default_sa.go:55] duration metric: took 2.245448ms for default service account to be created ...
I0317 11:03:47.631462 245681 system_pods.go:116] waiting for k8s-apps to be running ...
I0317 11:03:47.633680 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:47.633694 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:47.633698 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:47.633703 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:47.633707 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:47.633710 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:47.633713 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:47.633715 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:47.633740 245681 retry.go:31] will retry after 208.624093ms: missing components: kube-dns
I0317 11:03:47.845983 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:47.846001 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:47.846005 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:47.846011 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:47.846014 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:47.846017 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:47.846020 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:47.846022 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:47.846034 245681 retry.go:31] will retry after 322.393506ms: missing components: kube-dns
I0317 11:03:48.172551 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:48.172572 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:48.172576 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:48.172582 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:48.172585 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:48.172589 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:48.172592 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:48.172596 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:48.172607 245681 retry.go:31] will retry after 329.587841ms: missing components: kube-dns
I0317 11:03:48.507513 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:48.507529 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:48.507534 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:48.507541 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:48.507545 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:48.507548 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:48.507551 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:48.507553 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:48.507564 245681 retry.go:31] will retry after 486.130076ms: missing components: kube-dns
I0317 11:03:48.996755 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:48.996784 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:48.996788 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:48.996795 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:48.996798 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:48.996801 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:48.996803 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:48.996808 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:48.996821 245681 retry.go:31] will retry after 594.939063ms: missing components: kube-dns
I0317 11:03:49.595554 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:49.595573 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:49.595577 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:49.595583 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:49.595586 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:49.595589 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:49.595592 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:49.595594 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:49.595605 245681 retry.go:31] will retry after 584.315761ms: missing components: kube-dns
I0317 11:03:50.183549 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:50.183577 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:50.183581 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:50.183587 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:50.183590 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:50.183593 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:50.183595 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:50.183597 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:50.183611 245681 retry.go:31] will retry after 818.942859ms: missing components: kube-dns
I0317 11:03:51.006535 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:51.006552 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:51.006556 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:51.006562 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:51.006565 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:51.006568 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:51.006570 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:51.006572 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:51.006583 245681 retry.go:31] will retry after 1.023904266s: missing components: kube-dns
I0317 11:03:48.148792 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:50.649053 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:49.533914 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:52.033719 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:51.913217 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:53.913804 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:52.034391 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:52.034407 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:52.034411 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:52.034418 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:52.034423 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:52.034426 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:52.034430 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:52.034432 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:52.034443 245681 retry.go:31] will retry after 1.438418964s: missing components: kube-dns
I0317 11:03:53.477096 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:53.477115 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:53.477119 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:53.477125 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:53.477128 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:53.477131 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:53.477133 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:53.477136 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:53.477147 245681 retry.go:31] will retry after 1.706517056s: missing components: kube-dns
I0317 11:03:55.187542 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:55.187561 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:55.187567 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:55.187574 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:55.187577 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:55.187580 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:55.187582 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:55.187584 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:55.187596 245681 retry.go:31] will retry after 2.016724605s: missing components: kube-dns
I0317 11:03:52.649095 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:55.148710 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:57.149810 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:54.533660 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:57.034175 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:56.413532 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:58.913374 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:57.209012 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:03:57.209030 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:03:57.209034 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:03:57.209040 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:03:57.209043 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:03:57.209046 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:03:57.209049 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:03:57.209051 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:03:57.209070 245681 retry.go:31] will retry after 2.863078821s: missing components: kube-dns
I0317 11:04:00.077082 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:00.077102 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:00.077106 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:00.077112 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:00.077116 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:00.077119 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:00.077121 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:00.077123 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:00.077136 245681 retry.go:31] will retry after 3.357048609s: missing components: kube-dns
I0317 11:03:59.648763 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:02.148762 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:03:59.533729 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:01.534202 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:03.536116 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:01.413655 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:03.413742 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:03.438019 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:03.438039 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:03.438044 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:03.438049 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:03.438053 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:03.438056 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:03.438060 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:03.438062 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:03.438075 245681 retry.go:31] will retry after 4.751945119s: missing components: kube-dns
I0317 11:04:04.648885 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:06.649127 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:06.033695 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:08.534321 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:05.913392 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:08.412709 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:08.194256 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:08.194274 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:08.194278 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:08.194286 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:08.194289 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:08.194292 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:08.194294 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:08.194296 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:08.194307 245681 retry.go:31] will retry after 4.655703533s: missing components: kube-dns
I0317 11:04:09.148372 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:11.149273 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:11.033836 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:13.034472 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:10.412969 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:12.413781 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:12.853750 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:12.853770 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:12.853776 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:12.853784 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:12.853788 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:12.853792 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:12.853796 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:12.853799 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:12.853813 245681 retry.go:31] will retry after 6.617216886s: missing components: kube-dns
I0317 11:04:13.648760 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:16.150790 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:15.533314 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:17.533852 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:14.913146 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:17.413191 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:19.414688 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:19.474873 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:19.474894 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:19.474898 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:19.474904 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:19.474907 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:19.474913 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:19.474915 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:19.474917 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:19.474931 245681 retry.go:31] will retry after 7.39578455s: missing components: kube-dns
I0317 11:04:18.648614 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:20.649479 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:20.033768 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:22.534210 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:21.913011 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:23.913212 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:23.148451 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:25.149265 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:25.033686 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:27.534095 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:25.913387 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:27.913564 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:26.874163 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:26.874182 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:26.874187 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:26.874195 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:26.874198 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:26.874201 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:26.874204 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:26.874206 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:26.874219 245681 retry.go:31] will retry after 12.601914902s: missing components: kube-dns
I0317 11:04:27.648526 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:29.649214 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:31.649666 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:30.033770 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:32.533597 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:30.412714 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:32.413230 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:34.413482 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:34.148783 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:36.148832 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:34.533976 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:37.033284 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:39.033794 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:36.912932 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:38.913417 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:39.480299 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:39.480316 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:39.480320 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:39.480326 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:39.480329 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:39.480331 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:39.480334 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:39.480336 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:39.480349 245681 retry.go:31] will retry after 16.356369315s: missing components: kube-dns
I0317 11:04:38.648736 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:40.648879 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:41.034493 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:43.533541 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:40.914920 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:43.412457 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:43.148696 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:45.648517 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:45.534005 255203 pod_ready.go:103] pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:45.534028 255203 pod_ready.go:82] duration metric: took 4m0.005195322s for pod "coredns-668d6bf9bc-rl5k6" in "kube-system" namespace to be "Ready" ...
E0317 11:04:45.534037 255203 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0317 11:04:45.534043 255203 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.537073 255203 pod_ready.go:93] pod "etcd-auto-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:04:45.537096 255203 pod_ready.go:82] duration metric: took 3.045951ms for pod "etcd-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.537110 255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.540213 255203 pod_ready.go:93] pod "kube-apiserver-auto-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:04:45.540229 255203 pod_ready.go:82] duration metric: took 3.112401ms for pod "kube-apiserver-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.540238 255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.543301 255203 pod_ready.go:93] pod "kube-controller-manager-auto-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:04:45.543315 255203 pod_ready.go:82] duration metric: took 3.071405ms for pod "kube-controller-manager-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.543323 255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-jcdsz" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.546487 255203 pod_ready.go:93] pod "kube-proxy-jcdsz" in "kube-system" namespace has status "Ready":"True"
I0317 11:04:45.546501 255203 pod_ready.go:82] duration metric: took 3.173334ms for pod "kube-proxy-jcdsz" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.546507 255203 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.932558 255203 pod_ready.go:93] pod "kube-scheduler-auto-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:04:45.932579 255203 pod_ready.go:82] duration metric: took 386.066634ms for pod "kube-scheduler-auto-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:04:45.932587 255203 pod_ready.go:39] duration metric: took 4m2.409980263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 11:04:45.932604 255203 api_server.go:52] waiting for apiserver process to appear ...
I0317 11:04:45.932640 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:04:45.932697 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:04:45.965778 255203 cri.go:89] found id: "079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
I0317 11:04:45.965803 255203 cri.go:89] found id: ""
I0317 11:04:45.965811 255203 logs.go:282] 1 containers: [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a]
I0317 11:04:45.965866 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:45.969834 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:04:45.969906 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:04:46.001786 255203 cri.go:89] found id: "1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
I0317 11:04:46.001809 255203 cri.go:89] found id: ""
I0317 11:04:46.001817 255203 logs.go:282] 1 containers: [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc]
I0317 11:04:46.001882 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:46.005480 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:04:46.005540 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:04:46.036917 255203 cri.go:89] found id: ""
I0317 11:04:46.036949 255203 logs.go:282] 0 containers: []
W0317 11:04:46.036959 255203 logs.go:284] No container was found matching "coredns"
I0317 11:04:46.036966 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:04:46.037030 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:04:46.070471 255203 cri.go:89] found id: "a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
I0317 11:04:46.070495 255203 cri.go:89] found id: ""
I0317 11:04:46.070502 255203 logs.go:282] 1 containers: [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b]
I0317 11:04:46.070548 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:46.073947 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:04:46.074013 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:04:46.105813 255203 cri.go:89] found id: "a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
I0317 11:04:46.105851 255203 cri.go:89] found id: ""
I0317 11:04:46.105858 255203 logs.go:282] 1 containers: [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e]
I0317 11:04:46.105906 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:46.109214 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:04:46.109274 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:04:46.141415 255203 cri.go:89] found id: "00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
I0317 11:04:46.141437 255203 cri.go:89] found id: ""
I0317 11:04:46.141446 255203 logs.go:282] 1 containers: [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f]
I0317 11:04:46.141505 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:46.145603 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:04:46.145667 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:04:46.181315 255203 cri.go:89] found id: ""
I0317 11:04:46.181339 255203 logs.go:282] 0 containers: []
W0317 11:04:46.181348 255203 logs.go:284] No container was found matching "kindnet"
I0317 11:04:46.181365 255203 logs.go:123] Gathering logs for dmesg ...
I0317 11:04:46.181379 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:04:46.199524 255203 logs.go:123] Gathering logs for describe nodes ...
I0317 11:04:46.199555 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:04:46.284323 255203 logs.go:123] Gathering logs for kube-apiserver [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a] ...
I0317 11:04:46.284351 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
I0317 11:04:46.324591 255203 logs.go:123] Gathering logs for etcd [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc] ...
I0317 11:04:46.324619 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
I0317 11:04:46.361651 255203 logs.go:123] Gathering logs for kube-scheduler [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b] ...
I0317 11:04:46.361679 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
I0317 11:04:46.401009 255203 logs.go:123] Gathering logs for kube-proxy [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e] ...
I0317 11:04:46.401039 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
I0317 11:04:46.434852 255203 logs.go:123] Gathering logs for kube-controller-manager [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f] ...
I0317 11:04:46.434882 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
I0317 11:04:46.482469 255203 logs.go:123] Gathering logs for container status ...
I0317 11:04:46.482498 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:04:46.518409 255203 logs.go:123] Gathering logs for kubelet ...
I0317 11:04:46.518439 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:04:46.610561 255203 logs.go:123] Gathering logs for containerd ...
I0317 11:04:46.610595 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:04:49.156457 255203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0317 11:04:49.167188 255203 api_server.go:72] duration metric: took 4m6.341889458s to wait for apiserver process to appear ...
I0317 11:04:49.167208 255203 api_server.go:88] waiting for apiserver healthz status ...
I0317 11:04:49.167234 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:04:49.167301 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:04:49.198198 255203 cri.go:89] found id: "079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
I0317 11:04:49.198227 255203 cri.go:89] found id: ""
I0317 11:04:49.198237 255203 logs.go:282] 1 containers: [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a]
I0317 11:04:49.198301 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:49.201745 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:04:49.201804 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:04:49.236410 255203 cri.go:89] found id: "1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
I0317 11:04:49.236433 255203 cri.go:89] found id: ""
I0317 11:04:49.236442 255203 logs.go:282] 1 containers: [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc]
I0317 11:04:49.236497 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:49.240071 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:04:49.240149 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:04:49.273265 255203 cri.go:89] found id: ""
I0317 11:04:49.273292 255203 logs.go:282] 0 containers: []
W0317 11:04:49.273303 255203 logs.go:284] No container was found matching "coredns"
I0317 11:04:49.273310 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:04:49.273378 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:04:49.304655 255203 cri.go:89] found id: "a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
I0317 11:04:49.304679 255203 cri.go:89] found id: ""
I0317 11:04:49.304689 255203 logs.go:282] 1 containers: [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b]
I0317 11:04:49.304736 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:49.308178 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:04:49.308234 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:04:49.339992 255203 cri.go:89] found id: "a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
I0317 11:04:49.340017 255203 cri.go:89] found id: ""
I0317 11:04:49.340026 255203 logs.go:282] 1 containers: [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e]
I0317 11:04:49.340083 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:49.343381 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:04:49.343446 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:04:49.375770 255203 cri.go:89] found id: "00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
I0317 11:04:49.375791 255203 cri.go:89] found id: ""
I0317 11:04:49.375800 255203 logs.go:282] 1 containers: [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f]
I0317 11:04:49.375865 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:49.379083 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:04:49.379144 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:04:49.412093 255203 cri.go:89] found id: ""
I0317 11:04:49.412119 255203 logs.go:282] 0 containers: []
W0317 11:04:49.412130 255203 logs.go:284] No container was found matching "kindnet"
I0317 11:04:49.412144 255203 logs.go:123] Gathering logs for dmesg ...
I0317 11:04:49.412161 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:04:49.430275 255203 logs.go:123] Gathering logs for kube-apiserver [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a] ...
I0317 11:04:49.430306 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
I0317 11:04:49.469418 255203 logs.go:123] Gathering logs for kube-scheduler [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b] ...
I0317 11:04:49.469446 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
I0317 11:04:45.412555 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:47.912953 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:47.648598 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:49.649521 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:52.148897 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:49.509906 255203 logs.go:123] Gathering logs for kube-proxy [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e] ...
I0317 11:04:49.509936 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
I0317 11:04:49.543381 255203 logs.go:123] Gathering logs for container status ...
I0317 11:04:49.543409 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:04:49.578445 255203 logs.go:123] Gathering logs for kubelet ...
I0317 11:04:49.578472 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:04:49.671453 255203 logs.go:123] Gathering logs for describe nodes ...
I0317 11:04:49.671484 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:04:49.752296 255203 logs.go:123] Gathering logs for etcd [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc] ...
I0317 11:04:49.752327 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
I0317 11:04:49.789145 255203 logs.go:123] Gathering logs for kube-controller-manager [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f] ...
I0317 11:04:49.789175 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
I0317 11:04:49.833437 255203 logs.go:123] Gathering logs for containerd ...
I0317 11:04:49.833478 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:04:52.376428 255203 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
I0317 11:04:52.380160 255203 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
ok
I0317 11:04:52.381121 255203 api_server.go:141] control plane version: v1.32.2
I0317 11:04:52.381147 255203 api_server.go:131] duration metric: took 3.213930735s to wait for apiserver health ...
I0317 11:04:52.381154 255203 system_pods.go:43] waiting for kube-system pods to appear ...
I0317 11:04:52.381173 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:04:52.381222 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:04:52.415960 255203 cri.go:89] found id: "079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
I0317 11:04:52.415982 255203 cri.go:89] found id: ""
I0317 11:04:52.415991 255203 logs.go:282] 1 containers: [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a]
I0317 11:04:52.416048 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:52.419718 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:04:52.419772 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:04:52.454820 255203 cri.go:89] found id: "1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
I0317 11:04:52.454908 255203 cri.go:89] found id: ""
I0317 11:04:52.454923 255203 logs.go:282] 1 containers: [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc]
I0317 11:04:52.454991 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:52.459020 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:04:52.459085 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:04:52.491803 255203 cri.go:89] found id: ""
I0317 11:04:52.491834 255203 logs.go:282] 0 containers: []
W0317 11:04:52.491843 255203 logs.go:284] No container was found matching "coredns"
I0317 11:04:52.491849 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:04:52.491903 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:04:52.526170 255203 cri.go:89] found id: "a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
I0317 11:04:52.526199 255203 cri.go:89] found id: ""
I0317 11:04:52.526209 255203 logs.go:282] 1 containers: [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b]
I0317 11:04:52.526272 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:52.529827 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:04:52.529903 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:04:52.562281 255203 cri.go:89] found id: "a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
I0317 11:04:52.562311 255203 cri.go:89] found id: ""
I0317 11:04:52.562320 255203 logs.go:282] 1 containers: [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e]
I0317 11:04:52.562383 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:52.565941 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:04:52.566001 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:04:52.598944 255203 cri.go:89] found id: "00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
I0317 11:04:52.598971 255203 cri.go:89] found id: ""
I0317 11:04:52.598982 255203 logs.go:282] 1 containers: [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f]
I0317 11:04:52.599044 255203 ssh_runner.go:195] Run: which crictl
I0317 11:04:52.602585 255203 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:04:52.602649 255203 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:04:52.635595 255203 cri.go:89] found id: ""
I0317 11:04:52.635617 255203 logs.go:282] 0 containers: []
W0317 11:04:52.635626 255203 logs.go:284] No container was found matching "kindnet"
I0317 11:04:52.635638 255203 logs.go:123] Gathering logs for describe nodes ...
I0317 11:04:52.635653 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:04:52.721412 255203 logs.go:123] Gathering logs for etcd [1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc] ...
I0317 11:04:52.721442 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1737dad0d323c2b8151bb669aabe69c07d337ceb8c43bb4828b829bbab2343dc"
I0317 11:04:52.761651 255203 logs.go:123] Gathering logs for kube-scheduler [a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b] ...
I0317 11:04:52.761685 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0e03a67ab1fa4718ee53cea3f6e0bc1bb627e139e878806dc83a93fed5c145b"
I0317 11:04:52.801775 255203 logs.go:123] Gathering logs for kube-controller-manager [00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f] ...
I0317 11:04:52.801810 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00972b32cd281e44ed6e08e92fb75e21d3035779b9146408343cf85df268fa5f"
I0317 11:04:52.848366 255203 logs.go:123] Gathering logs for containerd ...
I0317 11:04:52.848401 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:04:52.891075 255203 logs.go:123] Gathering logs for container status ...
I0317 11:04:52.891112 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:04:52.954106 255203 logs.go:123] Gathering logs for kube-apiserver [079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a] ...
I0317 11:04:52.954142 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079ff1249beced31f587df0504e29a582fff0e4f4dc6c9c775df6f99020c493a"
I0317 11:04:52.995653 255203 logs.go:123] Gathering logs for kube-proxy [a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e] ...
I0317 11:04:52.995685 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1c731d01eb5e4ad96f3942d849fcde14f52c9de213c4c33500e93505e3d2b2e"
I0317 11:04:53.032179 255203 logs.go:123] Gathering logs for kubelet ...
I0317 11:04:53.032210 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:04:53.124349 255203 logs.go:123] Gathering logs for dmesg ...
I0317 11:04:53.124385 255203 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:04:49.913272 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:52.413818 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:55.841385 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:04:55.841404 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:55.841409 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:04:55.841416 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:55.841419 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:04:55.841421 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:04:55.841424 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:04:55.841426 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:04:55.841437 245681 retry.go:31] will retry after 19.064243371s: missing components: kube-dns
I0317 11:04:54.149147 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:56.149457 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:55.650091 255203 system_pods.go:59] 8 kube-system pods found
I0317 11:04:55.650133 255203 system_pods.go:61] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:55.650142 255203 system_pods.go:61] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:55.650153 255203 system_pods.go:61] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:55.650160 255203 system_pods.go:61] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:55.650166 255203 system_pods.go:61] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:55.650171 255203 system_pods.go:61] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:55.650177 255203 system_pods.go:61] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:55.650182 255203 system_pods.go:61] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:55.650191 255203 system_pods.go:74] duration metric: took 3.269030261s to wait for pod list to return data ...
I0317 11:04:55.650201 255203 default_sa.go:34] waiting for default service account to be created ...
I0317 11:04:55.652891 255203 default_sa.go:45] found service account: "default"
I0317 11:04:55.652914 255203 default_sa.go:55] duration metric: took 2.706728ms for default service account to be created ...
I0317 11:04:55.652921 255203 system_pods.go:116] waiting for k8s-apps to be running ...
I0317 11:04:55.655394 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:55.655429 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:55.655437 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:55.655447 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:55.655452 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:55.655460 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:55.655464 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:55.655467 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:55.655473 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:55.655494 255203 retry.go:31] will retry after 201.27772ms: missing components: kube-dns
I0317 11:04:55.861044 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:55.861077 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:55.861085 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:55.861094 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:55.861098 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:55.861103 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:55.861106 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:55.861109 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:55.861112 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:55.861126 255203 retry.go:31] will retry after 312.286943ms: missing components: kube-dns
I0317 11:04:56.176707 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:56.176740 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:56.176746 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:56.176754 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:56.176758 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:56.176762 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:56.176765 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:56.176768 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:56.176771 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:56.176786 255203 retry.go:31] will retry after 421.052014ms: missing components: kube-dns
I0317 11:04:56.602089 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:56.602121 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:56.602126 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:56.602134 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:56.602138 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:56.602142 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:56.602145 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:56.602151 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:56.602154 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:56.602166 255203 retry.go:31] will retry after 469.77104ms: missing components: kube-dns
I0317 11:04:57.076461 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:57.076568 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:57.076587 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:57.076608 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:57.076629 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:57.076647 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:57.076662 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:57.076676 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:57.076690 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:57.076722 255203 retry.go:31] will retry after 656.119155ms: missing components: kube-dns
I0317 11:04:57.736412 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:57.736456 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:57.736464 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:57.736474 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:57.736480 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:57.736486 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:57.736491 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:57.736497 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:57.736509 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:57.736526 255203 retry.go:31] will retry after 893.562069ms: missing components: kube-dns
I0317 11:04:58.633942 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:58.633986 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:58.633995 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:58.634007 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:58.634014 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:58.634024 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:58.634033 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:58.634038 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:58.634043 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:58.634062 255203 retry.go:31] will retry after 1.122298923s: missing components: kube-dns
I0317 11:04:54.913131 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:56.913269 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:59.413324 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:58.648543 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:00.649803 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:04:59.759953 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:04:59.759986 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:04:59.759992 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:04:59.759999 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:04:59.760004 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:04:59.760008 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:04:59.760011 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:04:59.760015 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:04:59.760018 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:04:59.760030 255203 retry.go:31] will retry after 1.218511595s: missing components: kube-dns
I0317 11:05:00.982785 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:00.982829 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:00.982838 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:00.982845 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:00.982849 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:00.982854 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:00.982857 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:00.982861 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:00.982865 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:00.982880 255203 retry.go:31] will retry after 1.171774567s: missing components: kube-dns
I0317 11:05:02.158314 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:02.158348 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:02.158354 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:02.158360 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:02.158364 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:02.158368 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:02.158372 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:02.158376 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:02.158379 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:02.158391 255203 retry.go:31] will retry after 1.696837803s: missing components: kube-dns
I0317 11:05:03.858863 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:03.858900 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:03.858906 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:03.858915 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:03.858919 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:03.858926 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:03.858930 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:03.858933 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:03.858936 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:03.858949 255203 retry.go:31] will retry after 2.428655233s: missing components: kube-dns
I0317 11:05:01.414244 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:03.915020 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:03.149068 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:05.648866 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:06.291480 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:06.291513 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:06.291519 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:06.291528 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:06.291532 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:06.291537 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:06.291540 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:06.291543 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:06.291546 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:06.291561 255203 retry.go:31] will retry after 2.373974056s: missing components: kube-dns
I0317 11:05:08.669149 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:08.669185 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:08.669191 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:08.669198 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:08.669202 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:08.669207 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:08.669210 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:08.669214 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:08.669217 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:08.669231 255203 retry.go:31] will retry after 2.902944154s: missing components: kube-dns
I0317 11:05:06.413297 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:08.913574 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:07.649064 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:09.649491 261225 pod_ready.go:103] pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:11.149005 261225 pod_ready.go:82] duration metric: took 4m0.005124542s for pod "coredns-668d6bf9bc-vjvg5" in "kube-system" namespace to be "Ready" ...
E0317 11:05:11.149032 261225 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0317 11:05:11.149044 261225 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.150773 261225 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-wht7f" not found
I0317 11:05:11.150799 261225 pod_ready.go:82] duration metric: took 1.746139ms for pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace to be "Ready" ...
E0317 11:05:11.150812 261225 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-wht7f" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-wht7f" not found
I0317 11:05:11.150820 261225 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.154478 261225 pod_ready.go:93] pod "etcd-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:05:11.154495 261225 pod_ready.go:82] duration metric: took 3.667556ms for pod "etcd-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.154505 261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.158180 261225 pod_ready.go:93] pod "kube-apiserver-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:05:11.158198 261225 pod_ready.go:82] duration metric: took 3.686563ms for pod "kube-apiserver-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.158206 261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.161883 261225 pod_ready.go:93] pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:05:11.161902 261225 pod_ready.go:82] duration metric: took 3.688883ms for pod "kube-controller-manager-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.161912 261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-sr64l" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.347703 261225 pod_ready.go:93] pod "kube-proxy-sr64l" in "kube-system" namespace has status "Ready":"True"
I0317 11:05:11.347728 261225 pod_ready.go:82] duration metric: took 185.808929ms for pod "kube-proxy-sr64l" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.347737 261225 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.748058 261225 pod_ready.go:93] pod "kube-scheduler-kindnet-236437" in "kube-system" namespace has status "Ready":"True"
I0317 11:05:11.748080 261225 pod_ready.go:82] duration metric: took 400.336874ms for pod "kube-scheduler-kindnet-236437" in "kube-system" namespace to be "Ready" ...
I0317 11:05:11.748088 261225 pod_ready.go:39] duration metric: took 4m0.610767407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 11:05:11.748109 261225 api_server.go:52] waiting for apiserver process to appear ...
I0317 11:05:11.748151 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:05:11.748204 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:05:11.782166 261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
I0317 11:05:11.782194 261225 cri.go:89] found id: ""
I0317 11:05:11.782202 261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
I0317 11:05:11.782250 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:11.785774 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:05:11.785828 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:05:11.818679 261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
I0317 11:05:11.818709 261225 cri.go:89] found id: ""
I0317 11:05:11.818718 261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
I0317 11:05:11.818773 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:11.822242 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:05:11.822313 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:05:11.855724 261225 cri.go:89] found id: ""
I0317 11:05:11.855749 261225 logs.go:282] 0 containers: []
W0317 11:05:11.855757 261225 logs.go:284] No container was found matching "coredns"
I0317 11:05:11.855762 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:05:11.855840 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:05:11.889868 261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
I0317 11:05:11.889895 261225 cri.go:89] found id: ""
I0317 11:05:11.889905 261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
I0317 11:05:11.889968 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:11.893455 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:05:11.893528 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:05:11.930185 261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
I0317 11:05:11.930215 261225 cri.go:89] found id: ""
I0317 11:05:11.930226 261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
I0317 11:05:11.930281 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:11.934085 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:05:11.934163 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:05:11.969461 261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
I0317 11:05:11.969486 261225 cri.go:89] found id: ""
I0317 11:05:11.969495 261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
I0317 11:05:11.969554 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:11.973137 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:05:11.973221 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:05:12.007038 261225 cri.go:89] found id: ""
I0317 11:05:12.007061 261225 logs.go:282] 0 containers: []
W0317 11:05:12.007068 261225 logs.go:284] No container was found matching "kindnet"
I0317 11:05:12.007082 261225 logs.go:123] Gathering logs for dmesg ...
I0317 11:05:12.007094 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:05:12.027405 261225 logs.go:123] Gathering logs for describe nodes ...
I0317 11:05:12.027439 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:05:12.114815 261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
I0317 11:05:12.114845 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
I0317 11:05:12.157696 261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
I0317 11:05:12.157731 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
I0317 11:05:12.195338 261225 logs.go:123] Gathering logs for containerd ...
I0317 11:05:12.195366 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:05:11.576191 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:11.576220 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:11.576226 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:11.576233 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:11.576237 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:11.576241 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:11.576244 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:11.576248 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:11.576250 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:11.576262 255203 retry.go:31] will retry after 5.178275462s: missing components: kube-dns
I0317 11:05:11.413836 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:13.914144 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:14.909964 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:05:14.909983 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:14.909990 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:05:14.909996 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:14.909999 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:05:14.910002 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:05:14.910004 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:05:14.910009 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:05:14.910021 245681 retry.go:31] will retry after 17.363957253s: missing components: kube-dns
I0317 11:05:12.239939 261225 logs.go:123] Gathering logs for kubelet ...
I0317 11:05:12.239978 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:05:12.332451 261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
I0317 11:05:12.332491 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
I0317 11:05:12.375771 261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
I0317 11:05:12.375804 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
I0317 11:05:12.416166 261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
I0317 11:05:12.416200 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
I0317 11:05:12.467570 261225 logs.go:123] Gathering logs for container status ...
I0317 11:05:12.467603 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:05:15.008253 261225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0317 11:05:15.020269 261225 api_server.go:72] duration metric: took 4m4.739086442s to wait for apiserver process to appear ...
I0317 11:05:15.020303 261225 api_server.go:88] waiting for apiserver healthz status ...
I0317 11:05:15.020339 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:05:15.020402 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:05:15.054066 261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
I0317 11:05:15.054088 261225 cri.go:89] found id: ""
I0317 11:05:15.054096 261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
I0317 11:05:15.054147 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:15.057724 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:05:15.057783 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:05:15.090544 261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
I0317 11:05:15.090565 261225 cri.go:89] found id: ""
I0317 11:05:15.090572 261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
I0317 11:05:15.090614 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:15.094062 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:05:15.094127 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:05:15.132281 261225 cri.go:89] found id: ""
I0317 11:05:15.132308 261225 logs.go:282] 0 containers: []
W0317 11:05:15.132319 261225 logs.go:284] No container was found matching "coredns"
I0317 11:05:15.132327 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:05:15.132383 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:05:15.166781 261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
I0317 11:05:15.166825 261225 cri.go:89] found id: ""
I0317 11:05:15.166835 261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
I0317 11:05:15.166893 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:15.170624 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:05:15.170690 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:05:15.203912 261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
I0317 11:05:15.203939 261225 cri.go:89] found id: ""
I0317 11:05:15.203950 261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
I0317 11:05:15.204008 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:15.207632 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:05:15.207715 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:05:15.241079 261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
I0317 11:05:15.241106 261225 cri.go:89] found id: ""
I0317 11:05:15.241117 261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
I0317 11:05:15.241174 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:15.244691 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:05:15.244758 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:05:15.280054 261225 cri.go:89] found id: ""
I0317 11:05:15.280078 261225 logs.go:282] 0 containers: []
W0317 11:05:15.280086 261225 logs.go:284] No container was found matching "kindnet"
I0317 11:05:15.280099 261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
I0317 11:05:15.280111 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
I0317 11:05:15.321837 261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
I0317 11:05:15.321870 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
I0317 11:05:15.364421 261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
I0317 11:05:15.364456 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
I0317 11:05:15.398977 261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
I0317 11:05:15.399005 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
I0317 11:05:15.449068 261225 logs.go:123] Gathering logs for containerd ...
I0317 11:05:15.449101 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:05:15.495271 261225 logs.go:123] Gathering logs for kubelet ...
I0317 11:05:15.495313 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:05:15.584229 261225 logs.go:123] Gathering logs for dmesg ...
I0317 11:05:15.584269 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:05:15.603621 261225 logs.go:123] Gathering logs for describe nodes ...
I0317 11:05:15.603651 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:05:15.689841 261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
I0317 11:05:15.689875 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
I0317 11:05:15.731335 261225 logs.go:123] Gathering logs for container status ...
I0317 11:05:15.731369 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:05:16.758565 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:16.758597 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:16.758603 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:16.758610 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:16.758616 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:16.758622 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:16.758625 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:16.758629 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:16.758633 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:16.758647 255203 retry.go:31] will retry after 4.630324475s: missing components: kube-dns
I0317 11:05:16.413215 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:18.413764 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:18.269898 261225 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0317 11:05:18.274573 261225 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0317 11:05:18.275657 261225 api_server.go:141] control plane version: v1.32.2
I0317 11:05:18.275685 261225 api_server.go:131] duration metric: took 3.255374368s to wait for apiserver health ...
I0317 11:05:18.275696 261225 system_pods.go:43] waiting for kube-system pods to appear ...
I0317 11:05:18.275723 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0317 11:05:18.275782 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0317 11:05:18.308555 261225 cri.go:89] found id: "8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
I0317 11:05:18.308574 261225 cri.go:89] found id: ""
I0317 11:05:18.308581 261225 logs.go:282] 1 containers: [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5]
I0317 11:05:18.308628 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:18.311845 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0317 11:05:18.311901 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0317 11:05:18.344040 261225 cri.go:89] found id: "23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
I0317 11:05:18.344062 261225 cri.go:89] found id: ""
I0317 11:05:18.344079 261225 logs.go:282] 1 containers: [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9]
I0317 11:05:18.344138 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:18.347489 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0317 11:05:18.347549 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0317 11:05:18.382251 261225 cri.go:89] found id: ""
I0317 11:05:18.382272 261225 logs.go:282] 0 containers: []
W0317 11:05:18.382280 261225 logs.go:284] No container was found matching "coredns"
I0317 11:05:18.382286 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0317 11:05:18.382340 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0317 11:05:18.416712 261225 cri.go:89] found id: "e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
I0317 11:05:18.416729 261225 cri.go:89] found id: ""
I0317 11:05:18.416736 261225 logs.go:282] 1 containers: [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997]
I0317 11:05:18.416777 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:18.420319 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0317 11:05:18.420397 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0317 11:05:18.454494 261225 cri.go:89] found id: "97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
I0317 11:05:18.454520 261225 cri.go:89] found id: ""
I0317 11:05:18.454539 261225 logs.go:282] 1 containers: [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7]
I0317 11:05:18.454594 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:18.457995 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0317 11:05:18.458063 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0317 11:05:18.490148 261225 cri.go:89] found id: "26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
I0317 11:05:18.490167 261225 cri.go:89] found id: ""
I0317 11:05:18.490174 261225 logs.go:282] 1 containers: [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405]
I0317 11:05:18.490225 261225 ssh_runner.go:195] Run: which crictl
I0317 11:05:18.493459 261225 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0317 11:05:18.493515 261225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0317 11:05:18.525609 261225 cri.go:89] found id: ""
I0317 11:05:18.525633 261225 logs.go:282] 0 containers: []
W0317 11:05:18.525644 261225 logs.go:284] No container was found matching "kindnet"
I0317 11:05:18.525661 261225 logs.go:123] Gathering logs for kubelet ...
I0317 11:05:18.525676 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0317 11:05:18.611130 261225 logs.go:123] Gathering logs for dmesg ...
I0317 11:05:18.611164 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0317 11:05:18.629424 261225 logs.go:123] Gathering logs for kube-apiserver [8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5] ...
I0317 11:05:18.629451 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a9e08743725766673ff03b16d3d8b9a7cf60931f63a8679ef932c1a96988aa5"
I0317 11:05:18.668784 261225 logs.go:123] Gathering logs for kube-scheduler [e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997] ...
I0317 11:05:18.668814 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e087c22571529ab3f9ebaf72c59368e45e6270402e936e24f7089e1462607997"
I0317 11:05:18.707925 261225 logs.go:123] Gathering logs for kube-proxy [97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7] ...
I0317 11:05:18.707953 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97833d9f535a707e3960692ddc68913cd5b696abcbd7da85e80e270e552544f7"
I0317 11:05:18.745255 261225 logs.go:123] Gathering logs for kube-controller-manager [26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405] ...
I0317 11:05:18.745282 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26fced44f34576bd0eb1aa29d81e14372909319511d2c7b8e03af6b2ef367405"
I0317 11:05:18.792139 261225 logs.go:123] Gathering logs for containerd ...
I0317 11:05:18.792168 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0317 11:05:18.837395 261225 logs.go:123] Gathering logs for describe nodes ...
I0317 11:05:18.837426 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0317 11:05:18.927307 261225 logs.go:123] Gathering logs for etcd [23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9] ...
I0317 11:05:18.927334 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e8fd260b96427c504a42689f33ed707983e5e76e6505c501a47f4ea63d3ef9"
I0317 11:05:18.970538 261225 logs.go:123] Gathering logs for container status ...
I0317 11:05:18.970572 261225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0317 11:05:21.510656 261225 system_pods.go:59] 8 kube-system pods found
I0317 11:05:21.510691 261225 system_pods.go:61] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:21.510697 261225 system_pods.go:61] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:21.510704 261225 system_pods.go:61] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:21.510711 261225 system_pods.go:61] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:21.510715 261225 system_pods.go:61] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:21.510718 261225 system_pods.go:61] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:21.510722 261225 system_pods.go:61] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:21.510725 261225 system_pods.go:61] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:21.510731 261225 system_pods.go:74] duration metric: took 3.235029547s to wait for pod list to return data ...
I0317 11:05:21.510740 261225 default_sa.go:34] waiting for default service account to be created ...
I0317 11:05:21.513446 261225 default_sa.go:45] found service account: "default"
I0317 11:05:21.513476 261225 default_sa.go:55] duration metric: took 2.728168ms for default service account to be created ...
I0317 11:05:21.513489 261225 system_pods.go:116] waiting for k8s-apps to be running ...
I0317 11:05:21.516171 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:21.516197 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:21.516205 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:21.516212 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:21.516216 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:21.516220 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:21.516223 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:21.516226 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:21.516228 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:21.516246 261225 retry.go:31] will retry after 304.55093ms: missing components: kube-dns
I0317 11:05:21.824952 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:21.824993 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:21.825002 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:21.825013 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:21.825018 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:21.825022 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:21.825026 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:21.825031 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:21.825036 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:21.825057 261225 retry.go:31] will retry after 301.434218ms: missing components: kube-dns
I0317 11:05:22.131409 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:22.131455 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:22.131469 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:22.131481 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:22.131487 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:22.131495 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:22.131506 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:22.131511 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:22.131516 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:22.131533 261225 retry.go:31] will retry after 479.197877ms: missing components: kube-dns
I0317 11:05:21.393756 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:21.393798 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:21.393807 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:21.393821 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:21.393831 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:21.393902 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:21.393930 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:21.393940 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:21.393945 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:21.393967 255203 retry.go:31] will retry after 5.810224129s: missing components: kube-dns
I0317 11:05:20.913030 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:23.413886 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:22.613878 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:22.613913 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:22.613921 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:22.613929 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:22.613935 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:22.613941 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:22.613946 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:22.613953 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:22.613958 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:22.613976 261225 retry.go:31] will retry after 442.216978ms: missing components: kube-dns
I0317 11:05:23.059458 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:23.059488 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:23.059494 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:23.059501 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:23.059506 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:23.059512 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:23.059517 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:23.059522 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:23.059530 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:23.059547 261225 retry.go:31] will retry after 657.88959ms: missing components: kube-dns
I0317 11:05:23.721630 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:23.721665 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:23.721673 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:23.721681 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:23.721687 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:23.721693 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:23.721698 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:23.721703 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:23.721712 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:23.721731 261225 retry.go:31] will retry after 610.04653ms: missing components: kube-dns
I0317 11:05:24.335549 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:24.335592 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:24.335603 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:24.335612 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:24.335616 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:24.335623 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:24.335630 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:24.335640 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:24.335647 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:24.335663 261225 retry.go:31] will retry after 985.298595ms: missing components: kube-dns
I0317 11:05:25.325186 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:25.325217 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:25.325223 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:25.325230 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:25.325234 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:25.325238 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:25.325241 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:25.325244 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:25.325247 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:25.325259 261225 retry.go:31] will retry after 980.725261ms: missing components: kube-dns
I0317 11:05:26.309421 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:26.309457 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:26.309465 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:26.309475 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:26.309483 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:26.309494 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:26.309505 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:26.309512 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:26.309518 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:26.309537 261225 retry.go:31] will retry after 1.123138561s: missing components: kube-dns
I0317 11:05:27.208820 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:27.208855 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:27.208862 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:27.208871 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:27.208877 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:27.208883 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:27.208887 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:27.208892 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:27.208898 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:27.208916 255203 retry.go:31] will retry after 8.348805555s: missing components: kube-dns
I0317 11:05:25.912638 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:27.913452 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:27.436613 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:27.436643 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:27.436649 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:27.436657 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:27.436662 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:27.436668 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:27.436674 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:27.436679 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:27.436684 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:27.436702 261225 retry.go:31] will retry after 1.57268651s: missing components: kube-dns
I0317 11:05:29.012826 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:29.012864 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:29.012872 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:29.012882 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:29.012888 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:29.012894 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:29.012898 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:29.012903 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:29.012908 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:29.012925 261225 retry.go:31] will retry after 2.671867502s: missing components: kube-dns
I0317 11:05:31.689143 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:31.689181 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:31.689189 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:31.689199 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:31.689205 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:31.689211 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:31.689216 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:31.689222 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:31.689227 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:31.689246 261225 retry.go:31] will retry after 3.255293189s: missing components: kube-dns
I0317 11:05:30.412494 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:32.412901 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:32.277700 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:05:32.277724 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:32.277731 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:05:32.277740 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:32.277744 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:05:32.277748 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:05:32.277750 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:05:32.277752 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:05:32.277766 245681 retry.go:31] will retry after 23.285243045s: missing components: kube-dns
I0317 11:05:34.948821 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:34.948853 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:34.948859 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:34.948866 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:34.948871 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:34.948875 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:34.948878 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:34.948882 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:34.948886 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:34.948899 261225 retry.go:31] will retry after 3.968980109s: missing components: kube-dns
I0317 11:05:35.562294 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:35.562334 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:35.562342 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:35.562351 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:35.562355 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:35.562359 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:35.562362 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:35.562365 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:35.562369 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:35.562382 255203 retry.go:31] will retry after 10.54807244s: missing components: kube-dns
I0317 11:05:34.912822 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:37.412649 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:38.922353 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:38.922385 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:38.922391 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:38.922399 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:38.922403 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:38.922407 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:38.922411 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:38.922414 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:38.922418 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:38.922432 261225 retry.go:31] will retry after 4.763605942s: missing components: kube-dns
I0317 11:05:39.912457 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:41.912831 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:44.412502 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:43.690391 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:43.690433 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:43.690442 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:43.690454 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:43.690461 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:43.690470 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:43.690479 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:43.690487 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:43.690491 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:43.690509 261225 retry.go:31] will retry after 5.467335218s: missing components: kube-dns
I0317 11:05:46.114496 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:05:46.114535 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:46.114541 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:05:46.114548 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:46.114552 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:05:46.114556 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:05:46.114559 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:05:46.114563 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:05:46.114565 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:05:46.114581 255203 retry.go:31] will retry after 15.508572932s: missing components: kube-dns
I0317 11:05:46.913439 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:48.913558 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:49.162254 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:49.162287 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:49.162293 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:49.162300 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:49.162303 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:49.162309 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:49.162312 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:49.162317 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:49.162321 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:49.162334 261225 retry.go:31] will retry after 5.883169741s: missing components: kube-dns
I0317 11:05:51.412685 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:53.413783 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:55.566604 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:05:55.566623 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:55.566627 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:05:55.566634 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:55.566640 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:05:55.566643 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:05:55.566646 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:05:55.566648 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:05:55.566661 245681 retry.go:31] will retry after 29.32259174s: missing components: kube-dns
I0317 11:05:55.050444 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:05:55.050483 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:05:55.050491 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:05:55.050501 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:05:55.050507 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:05:55.050513 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:05:55.050516 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:05:55.050520 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:05:55.050526 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:05:55.050545 261225 retry.go:31] will retry after 9.352777192s: missing components: kube-dns
I0317 11:05:55.913043 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:05:58.412339 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:01.626663 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:06:01.626695 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:01.626700 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:06:01.626708 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:01.626712 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:06:01.626716 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:06:01.626720 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:06:01.626723 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:06:01.626726 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:06:01.626740 255203 retry.go:31] will retry after 20.504309931s: missing components: kube-dns
I0317 11:06:00.412947 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:02.912700 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:04.407436 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:06:04.407468 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:04.407473 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:06:04.407481 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:04.407485 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:06:04.407490 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:06:04.407493 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:06:04.407497 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:06:04.407500 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:06:04.407513 261225 retry.go:31] will retry after 9.592726834s: missing components: kube-dns
I0317 11:06:04.913636 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:07.413198 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:09.413596 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:11.912609 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:13.913341 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:14.003835 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:06:14.003876 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:14.003884 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:06:14.003894 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:14.003897 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:06:14.003902 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:06:14.003905 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:06:14.003908 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:06:14.003911 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:06:14.003926 261225 retry.go:31] will retry after 15.514429293s: missing components: kube-dns
I0317 11:06:16.412593 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:18.913785 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:22.134045 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:06:22.134078 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:22.134083 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:06:22.134091 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:22.134095 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:06:22.134099 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:06:22.134105 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:06:22.134108 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:06:22.134111 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:06:22.134125 255203 retry.go:31] will retry after 23.428586225s: missing components: kube-dns
I0317 11:06:21.412772 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:23.412952 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:24.894075 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:06:24.894098 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:24.894103 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:06:24.894109 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:24.894111 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:06:24.894114 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:06:24.894117 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:06:24.894119 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:06:24.894131 245681 retry.go:31] will retry after 43.021190015s: missing components: kube-dns
I0317 11:06:25.912643 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:27.913773 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:29.522530 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:06:29.522566 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:29.522573 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:06:29.522582 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:29.522588 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:06:29.522594 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:06:29.522604 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:06:29.522609 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:06:29.522615 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:06:29.522635 261225 retry.go:31] will retry after 19.290967428s: missing components: kube-dns
I0317 11:06:30.412571 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:32.412732 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:34.913454 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:37.412555 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:39.412879 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:41.413880 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:43.913261 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:45.566284 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:06:45.566316 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:45.566325 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:06:45.566333 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:45.566337 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:06:45.566341 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:06:45.566344 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:06:45.566349 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:06:45.566352 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:06:45.566365 255203 retry.go:31] will retry after 32.86473348s: missing components: kube-dns
I0317 11:06:45.913636 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:48.412720 271403 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:48.816926 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:06:48.816957 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:06:48.816963 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:06:48.816971 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:06:48.816978 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:06:48.816981 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:06:48.816985 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:06:48.816988 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:06:48.816991 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:06:48.817004 261225 retry.go:31] will retry after 26.212373787s: missing components: kube-dns
I0317 11:06:49.912504 271403 pod_ready.go:82] duration metric: took 4m0.004506039s for pod "calico-kube-controllers-77969b7d87-pv6sc" in "kube-system" namespace to be "Ready" ...
E0317 11:06:49.912527 271403 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0317 11:06:49.912535 271403 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-ks7vr" in "kube-system" namespace to be "Ready" ...
I0317 11:06:51.918374 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:53.918973 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:56.418241 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:06:58.418488 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:00.918359 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:03.417624 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:05.418024 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:07.918973 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:07.920403 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:07:07.920426 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:07:07.920434 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:07:07.920443 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:07:07.920448 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:07:07.920452 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:07:07.920456 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:07:07.920459 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:07:07.920475 245681 retry.go:31] will retry after 53.923427957s: missing components: kube-dns
I0317 11:07:10.418272 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:12.918795 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:15.034531 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:07:15.034568 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:07:15.034577 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:07:15.034588 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:07:15.034594 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:07:15.034600 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:07:15.034608 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:07:15.034619 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:07:15.034624 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:07:15.034641 261225 retry.go:31] will retry after 28.925757751s: missing components: kube-dns
I0317 11:07:18.436444 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:07:18.436481 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:07:18.436488 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:07:18.436497 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:07:18.436504 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:07:18.436508 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:07:18.436512 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:07:18.436515 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:07:18.436518 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:07:18.436534 255203 retry.go:31] will retry after 27.149619295s: missing components: kube-dns
I0317 11:07:15.418597 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:17.917419 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:19.918049 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:21.918844 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:24.417950 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:26.918233 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:29.417309 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:31.418245 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:33.418716 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:35.918570 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:38.417408 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:40.417958 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:42.918827 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:43.964848 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:07:43.964882 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:07:43.964889 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:07:43.964898 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:07:43.964903 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:07:43.964907 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:07:43.964911 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:07:43.964914 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:07:43.964917 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:07:43.964929 261225 retry.go:31] will retry after 31.458446993s: missing components: kube-dns
I0317 11:07:45.589503 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:07:45.589536 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:07:45.589543 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:07:45.589554 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:07:45.589560 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:07:45.589567 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:07:45.589573 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:07:45.589580 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:07:45.589586 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:07:45.589608 255203 retry.go:31] will retry after 36.355329469s: missing components: kube-dns
I0317 11:07:45.418196 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:47.919273 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:50.417016 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:52.418046 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:54.418176 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:56.918881 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:07:59.417841 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:01.418036 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:03.917623 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:01.847931 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:08:01.847956 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:08:01.847962 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:08:01.847970 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:08:01.847974 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:08:01.847977 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:08:01.847980 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:08:01.847982 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:08:01.847996 245681 retry.go:31] will retry after 1m1.058602694s: missing components: kube-dns
I0317 11:08:05.918610 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:08.417523 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:10.417892 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:12.418404 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:15.427421 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:08:15.427462 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:08:15.427472 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:08:15.427483 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:08:15.427489 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:08:15.427495 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:08:15.427500 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:08:15.427505 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:08:15.427509 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:08:15.427522 261225 retry.go:31] will retry after 32.96114545s: missing components: kube-dns
I0317 11:08:14.918113 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:17.417020 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:19.417791 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:21.949272 255203 system_pods.go:86] 8 kube-system pods found
I0317 11:08:21.949312 255203 system_pods.go:89] "coredns-668d6bf9bc-rl5k6" [6afd1538-ceb1-450a-94e6-8cde6b141b7a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:08:21.949321 255203 system_pods.go:89] "etcd-auto-236437" [5330d138-2891-454d-b95c-04c0496c4dd0] Running
I0317 11:08:21.949330 255203 system_pods.go:89] "kindnet-n9ln5" [e4806063-235a-4479-854b-b1315437c2d9] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:08:21.949335 255203 system_pods.go:89] "kube-apiserver-auto-236437" [e6df7593-6721-4c69-83f8-1d0b98d63d4d] Running
I0317 11:08:21.949341 255203 system_pods.go:89] "kube-controller-manager-auto-236437" [d6d02699-f43f-43ff-8fd2-63a8762e0933] Running
I0317 11:08:21.949347 255203 system_pods.go:89] "kube-proxy-jcdsz" [16940a83-aa19-49ff-b38d-d149a3d16e82] Running
I0317 11:08:21.949356 255203 system_pods.go:89] "kube-scheduler-auto-236437" [e1a98756-8f1f-4e08-bd38-f32bdb8a139a] Running
I0317 11:08:21.949361 255203 system_pods.go:89] "storage-provisioner" [52ff9fab-6a3d-4eb6-891f-71daeaba07ba] Running
I0317 11:08:21.949381 255203 retry.go:31] will retry after 52.503914166s: missing components: kube-dns
I0317 11:08:21.917617 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:23.917816 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:25.917904 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:28.417956 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:30.917837 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:32.918089 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:34.918753 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:37.417237 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:39.919014 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:42.417529 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:44.418297 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:46.918258 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:48.918688 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:48.392505 261225 system_pods.go:86] 8 kube-system pods found
I0317 11:08:48.392536 261225 system_pods.go:89] "coredns-668d6bf9bc-vjvg5" [03dfb814-0afa-4d74-8034-42ec82dcd27b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:08:48.392542 261225 system_pods.go:89] "etcd-kindnet-236437" [baab6666-fd5f-41eb-a385-c26076689387] Running
I0317 11:08:48.392549 261225 system_pods.go:89] "kindnet-zvsqh" [e1872d0f-e566-4445-a664-c792c2be8985] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:08:48.392555 261225 system_pods.go:89] "kube-apiserver-kindnet-236437" [3168ab04-e887-4ce5-b1ad-85425c4d5faf] Running
I0317 11:08:48.392561 261225 system_pods.go:89] "kube-controller-manager-kindnet-236437" [1f4389db-085b-4109-9f1c-c21840a2807d] Running
I0317 11:08:48.392566 261225 system_pods.go:89] "kube-proxy-sr64l" [82481fe7-dac3-4004-9d3d-dc98ac022576] Running
I0317 11:08:48.392571 261225 system_pods.go:89] "kube-scheduler-kindnet-236437" [440eeeed-8de9-43ef-8637-b15074722801] Running
I0317 11:08:48.392579 261225 system_pods.go:89] "storage-provisioner" [a1e439f4-47da-48b5-b759-bc811dc68109] Running
I0317 11:08:48.392597 261225 retry.go:31] will retry after 40.97829734s: missing components: kube-dns
I0317 11:08:51.417307 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:53.418355 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:55.918117 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:08:57.918410 271403 pod_ready.go:103] pod "calico-node-ks7vr" in "kube-system" namespace has status "Ready":"False"
I0317 11:09:02.910188 245681 system_pods.go:86] 7 kube-system pods found
I0317 11:09:02.910217 245681 system_pods.go:89] "coredns-668d6bf9bc-c7scj" [1f683caa-60d7-44f8-b772-ab187e908994] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0317 11:09:02.910222 245681 system_pods.go:89] "etcd-pause-507725" [c0f23405-4d88-4834-a413-81f6e2d3fed4] Running
I0317 11:09:02.910229 245681 system_pods.go:89] "kindnet-dz8rm" [c7a272d4-8d2d-45e7-af98-bfb37db11888] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0317 11:09:02.910231 245681 system_pods.go:89] "kube-apiserver-pause-507725" [7cdd6bce-9408-46c8-a118-2f8646cbc75e] Running
I0317 11:09:02.910234 245681 system_pods.go:89] "kube-controller-manager-pause-507725" [55d5a993-6a20-4cb9-97b0-e7f676aece73] Running
I0317 11:09:02.910236 245681 system_pods.go:89] "kube-proxy-lmh8d" [5eefd07d-e4cf-4bc5-aecb-262efad90229] Running
I0317 11:09:02.910238 245681 system_pods.go:89] "kube-scheduler-pause-507725" [29bb527f-d065-466f-a475-761673d28cd9] Running
I0317 11:09:02.912217 245681 out.go:201]
W0317 11:09:02.913584 245681 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
W0317 11:09:02.913604 245681 out.go:270] *
W0317 11:09:02.914653 245681 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0317 11:09:02.916068 245681 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
491cadd11c003 f1332858868e1 9 minutes ago Running kube-proxy 0 5195a921f9a59 kube-proxy-lmh8d
d003c8e8dced3 d8e673e7c9983 9 minutes ago Running kube-scheduler 0 17ddba14c205c kube-scheduler-pause-507725
d870fd4dffe56 85b7a174738ba 9 minutes ago Running kube-apiserver 0 b34936203cd4e kube-apiserver-pause-507725
80ceacde36f32 b6a454c5a800d 9 minutes ago Running kube-controller-manager 0 fc94d7ad8d77b kube-controller-manager-pause-507725
5f8e66af286f7 a9e7e6b294baf 9 minutes ago Running etcd 0 2af839fc0332d etcd-pause-507725
==> containerd <==
Mar 17 11:06:22 pause-507725 containerd[875]: time="2025-03-17T11:06:22.125265986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba91e813d618fed827b357e412ac5ea1bac03faf8d25fd213ec6681ba02a3a43\": failed to find network info for sandbox \"ba91e813d618fed827b357e412ac5ea1bac03faf8d25fd213ec6681ba02a3a43\""
Mar 17 11:06:33 pause-507725 containerd[875]: time="2025-03-17T11:06:33.107114133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:06:33 pause-507725 containerd[875]: time="2025-03-17T11:06:33.125571625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03669b507fabefd7a7439c72050b16b52dd1c24e68f434a6895a5006b3a0b19d\": failed to find network info for sandbox \"03669b507fabefd7a7439c72050b16b52dd1c24e68f434a6895a5006b3a0b19d\""
Mar 17 11:06:49 pause-507725 containerd[875]: time="2025-03-17T11:06:49.106897888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:06:49 pause-507725 containerd[875]: time="2025-03-17T11:06:49.125026596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"07f7e01a84e49f010c2d20f74fe4d30f6273f24782bb9d342ae01aeb474367eb\": failed to find network info for sandbox \"07f7e01a84e49f010c2d20f74fe4d30f6273f24782bb9d342ae01aeb474367eb\""
Mar 17 11:07:01 pause-507725 containerd[875]: time="2025-03-17T11:07:01.107832893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:07:01 pause-507725 containerd[875]: time="2025-03-17T11:07:01.127232587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e74da7172b1e465c7c90b97db8563f5ca208449d7cdb777442a8f65a427ca150\": failed to find network info for sandbox \"e74da7172b1e465c7c90b97db8563f5ca208449d7cdb777442a8f65a427ca150\""
Mar 17 11:07:16 pause-507725 containerd[875]: time="2025-03-17T11:07:16.108053001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:07:16 pause-507725 containerd[875]: time="2025-03-17T11:07:16.127421346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87662ee9ccf85daadaa5ed7ab6488905433f3c2f57b53e9682dd13ccff3208d3\": failed to find network info for sandbox \"87662ee9ccf85daadaa5ed7ab6488905433f3c2f57b53e9682dd13ccff3208d3\""
Mar 17 11:07:28 pause-507725 containerd[875]: time="2025-03-17T11:07:28.107127360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:07:28 pause-507725 containerd[875]: time="2025-03-17T11:07:28.126458750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e57c4aea1367ddef178fbffea911b2ab1f61a35f077325a52c43bff5c4e3744\": failed to find network info for sandbox \"1e57c4aea1367ddef178fbffea911b2ab1f61a35f077325a52c43bff5c4e3744\""
Mar 17 11:07:39 pause-507725 containerd[875]: time="2025-03-17T11:07:39.107512780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:07:39 pause-507725 containerd[875]: time="2025-03-17T11:07:39.127005681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9b7dd1f2a06d86dd7856395b32e4178006ac8b9c52903e1080c869be1a0d77e\": failed to find network info for sandbox \"a9b7dd1f2a06d86dd7856395b32e4178006ac8b9c52903e1080c869be1a0d77e\""
Mar 17 11:07:53 pause-507725 containerd[875]: time="2025-03-17T11:07:53.107399913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:07:53 pause-507725 containerd[875]: time="2025-03-17T11:07:53.125483050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e95094bb67a9a22a07d4e8b3eebfd630e0c45391cbffc4d04900540d845e5d10\": failed to find network info for sandbox \"e95094bb67a9a22a07d4e8b3eebfd630e0c45391cbffc4d04900540d845e5d10\""
Mar 17 11:08:06 pause-507725 containerd[875]: time="2025-03-17T11:08:06.107314732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:08:06 pause-507725 containerd[875]: time="2025-03-17T11:08:06.126209721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\""
Mar 17 11:08:20 pause-507725 containerd[875]: time="2025-03-17T11:08:20.109255755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:08:20 pause-507725 containerd[875]: time="2025-03-17T11:08:20.129381467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\""
Mar 17 11:08:33 pause-507725 containerd[875]: time="2025-03-17T11:08:33.107167526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:08:33 pause-507725 containerd[875]: time="2025-03-17T11:08:33.126559875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\""
Mar 17 11:08:46 pause-507725 containerd[875]: time="2025-03-17T11:08:46.107017109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:08:46 pause-507725 containerd[875]: time="2025-03-17T11:08:46.124762851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\""
Mar 17 11:08:57 pause-507725 containerd[875]: time="2025-03-17T11:08:57.107877216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,}"
Mar 17 11:08:57 pause-507725 containerd[875]: time="2025-03-17T11:08:57.127117051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c7scj,Uid:1f683caa-60d7-44f8-b772-ab187e908994,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\""
==> describe nodes <==
Name: pause-507725
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-507725
kubernetes.io/os=linux
minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
minikube.k8s.io/name=pause-507725
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_03_17T10_59_30_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 17 Mar 2025 10:59:27 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-507725
AcquireTime: <unset>
RenewTime: Mon, 17 Mar 2025 11:09:02 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 17 Mar 2025 11:07:08 +0000 Mon, 17 Mar 2025 10:59:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 17 Mar 2025 11:07:08 +0000 Mon, 17 Mar 2025 10:59:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 17 Mar 2025 11:07:08 +0000 Mon, 17 Mar 2025 10:59:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 17 Mar 2025 11:07:08 +0000 Mon, 17 Mar 2025 10:59:28 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.103.2
Hostname: pause-507725
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859368Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32859368Ki
pods: 110
System Info:
Machine ID: 9eb763a95d9b4e9fb768130dae7e03ee
System UUID: 8fb7f3f3-791a-47b3-80f7-6ddbcbe87a67
Boot ID: 6cdff8eb-9dff-46dc-b46a-15af38578335
Kernel Version: 5.15.0-1078-gcp
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.25
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-668d6bf9bc-c7scj 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 9m28s
kube-system etcd-pause-507725 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 9m33s
kube-system kindnet-dz8rm 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 9m28s
kube-system kube-apiserver-pause-507725 250m (3%) 0 (0%) 0 (0%) 0 (0%) 9m33s
kube-system kube-controller-manager-pause-507725 200m (2%) 0 (0%) 0 (0%) 0 (0%) 9m33s
kube-system kube-proxy-lmh8d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m28s
kube-system kube-scheduler-pause-507725 100m (1%) 0 (0%) 0 (0%) 0 (0%) 9m34s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 100m (1%)
memory 220Mi (0%) 220Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 9m28s kube-proxy
Normal NodeHasSufficientMemory 9m39s (x8 over 9m39s) kubelet Node pause-507725 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m39s (x8 over 9m39s) kubelet Node pause-507725 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m39s (x7 over 9m39s) kubelet Node pause-507725 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 9m39s kubelet Updated Node Allocatable limit across pods
Normal Starting 9m34s kubelet Starting kubelet.
Warning CgroupV1 9m34s kubelet cgroup v1 support is in maintenance mode, please migrate to cgroup v2
Normal NodeAllocatableEnforced 9m33s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 9m33s kubelet Node pause-507725 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m33s kubelet Node pause-507725 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m33s kubelet Node pause-507725 status is now: NodeHasSufficientPID
Normal RegisteredNode 9m29s node-controller Node pause-507725 event: Registered Node pause-507725 in Controller
==> dmesg <==
[ +1.010472] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000006] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +0.000001] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000002] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +2.011808] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000007] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000001] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +0.003979] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000006] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +4.123642] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000007] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000001] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +8.191265] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000006] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[ +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-81e0001ceae7
[ +0.000002] ll header: 00000000: 6e 6a cf 1c 79 e6 4a 28 c7 6c 46 af 08 00
[Mar17 10:54] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3d29cf6460ef
[ +0.000005] ll header: 00000000: 1e ab 6c 22 c8 11 ee 9e 42 a2 db 99 08 00
[ +1.001464] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-3d29cf6460ef
[ +0.000007] ll header: 00000000: 1e ab 6c 22 c8 11 ee 9e 42 a2 db 99 08 00
[Mar17 10:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
==> etcd [5f8e66af286f70c96e001cd3306400e23d18ed7bd8f0219fd761a1a196256bc2] <==
{"level":"info","ts":"2025-03-17T10:59:25.734902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
{"level":"info","ts":"2025-03-17T10:59:25.734943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
{"level":"info","ts":"2025-03-17T10:59:25.734965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
{"level":"info","ts":"2025-03-17T10:59:25.734978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
{"level":"info","ts":"2025-03-17T10:59:25.734992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
{"level":"info","ts":"2025-03-17T10:59:25.735007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
{"level":"info","ts":"2025-03-17T10:59:25.736088Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-03-17T10:59:25.736252Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-03-17T10:59:25.736253Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:pause-507725 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
{"level":"info","ts":"2025-03-17T10:59:25.736278Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-03-17T10:59:25.736487Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-03-17T10:59:25.736507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-03-17T10:59:25.736741Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
{"level":"info","ts":"2025-03-17T10:59:25.736854Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-03-17T10:59:25.736980Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-03-17T10:59:25.737134Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-03-17T10:59:25.737247Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-03-17T10:59:25.737836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
{"level":"info","ts":"2025-03-17T10:59:25.737849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-03-17T11:00:05.250083Z","caller":"traceutil/trace.go:171","msg":"trace[382279266] linearizableReadLoop","detail":"{readStateIndex:449; appliedIndex:448; }","duration":"132.68516ms","start":"2025-03-17T11:00:05.117363Z","end":"2025-03-17T11:00:05.250048Z","steps":["trace[382279266] 'read index received' (duration: 71.38496ms)","trace[382279266] 'applied index is now lower than readState.Index' (duration: 61.299182ms)"],"step_count":2}
{"level":"info","ts":"2025-03-17T11:00:05.250247Z","caller":"traceutil/trace.go:171","msg":"trace[257573388] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"135.125308ms","start":"2025-03-17T11:00:05.115098Z","end":"2025-03-17T11:00:05.250224Z","steps":["trace[257573388] 'process raft request' (duration: 73.736927ms)","trace[257573388] 'compare' (duration: 61.043363ms)"],"step_count":2}
{"level":"warn","ts":"2025-03-17T11:00:05.250319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.886951ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kindnet-dz8rm.182d92048ece9662\" limit:1 ","response":"range_response_count:1 size:714"}
{"level":"info","ts":"2025-03-17T11:00:05.250379Z","caller":"traceutil/trace.go:171","msg":"trace[202133278] range","detail":"{range_begin:/registry/events/kube-system/kindnet-dz8rm.182d92048ece9662; range_end:; response_count:1; response_revision:429; }","duration":"133.04748ms","start":"2025-03-17T11:00:05.117321Z","end":"2025-03-17T11:00:05.250368Z","steps":["trace[202133278] 'agreement among raft nodes before linearized reading' (duration: 132.864758ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T11:00:52.279222Z","caller":"traceutil/trace.go:171","msg":"trace[683652511] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"103.989191ms","start":"2025-03-17T11:00:52.175210Z","end":"2025-03-17T11:00:52.279199Z","steps":["trace[683652511] 'process raft request' (duration: 103.867147ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T11:02:29.364796Z","caller":"traceutil/trace.go:171","msg":"trace[1526124624] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"125.268917ms","start":"2025-03-17T11:02:29.239501Z","end":"2025-03-17T11:02:29.364770Z","steps":["trace[1526124624] 'process raft request' (duration: 62.477978ms)","trace[1526124624] 'compare' (duration: 62.665171ms)"],"step_count":2}
==> kernel <==
11:09:04 up 50 min, 0 users, load average: 0.49, 1.03, 1.35
Linux pause-507725 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [d870fd4dffe567986c724732945da8fba9e5abe8c5a8ac011df1b7037f18aa01] <==
I0317 10:59:27.804159 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I0317 10:59:27.803742 1 aggregator.go:171] initial CRD sync complete...
I0317 10:59:27.804698 1 autoregister_controller.go:144] Starting autoregister controller
I0317 10:59:27.804803 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0317 10:59:27.804917 1 cache.go:39] Caches are synced for autoregister controller
I0317 10:59:27.806417 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0317 10:59:27.806454 1 policy_source.go:240] refreshing policies
E0317 10:59:27.807951 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0317 10:59:27.808291 1 controller.go:615] quota admission added evaluator for: namespaces
I0317 10:59:28.012886 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0317 10:59:28.657870 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0317 10:59:28.662425 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0317 10:59:28.662444 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0317 10:59:29.084293 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0317 10:59:29.117199 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0317 10:59:29.214127 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0317 10:59:29.220784 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
I0317 10:59:29.221856 1 controller.go:615] quota admission added evaluator for: endpoints
I0317 10:59:29.225687 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0317 10:59:29.723620 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0317 10:59:30.104918 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0317 10:59:30.117999 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0317 10:59:30.125918 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0317 10:59:35.225517 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0317 10:59:35.310809 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
==> kube-controller-manager [80ceacde36f323a3c53fe5da9a1f0fa5647bd05b4bd46fb69ea6e12944112718] <==
I0317 10:59:34.274339 1 shared_informer.go:320] Caches are synced for stateful set
I0317 10:59:34.274341 1 shared_informer.go:320] Caches are synced for deployment
I0317 10:59:34.277142 1 shared_informer.go:320] Caches are synced for node
I0317 10:59:34.277193 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I0317 10:59:34.277224 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I0317 10:59:34.277232 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0317 10:59:34.277238 1 shared_informer.go:320] Caches are synced for cidrallocator
I0317 10:59:34.278714 1 shared_informer.go:320] Caches are synced for resource quota
I0317 10:59:34.286825 1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-507725" podCIDRs=["10.244.0.0/24"]
I0317 10:59:34.286868 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
I0317 10:59:34.286894 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
I0317 10:59:34.291159 1 shared_informer.go:320] Caches are synced for garbage collector
I0317 10:59:35.215947 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
I0317 10:59:35.427587 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="198.785584ms"
I0317 10:59:35.434273 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.638713ms"
I0317 10:59:35.434365 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.51µs"
I0317 10:59:35.441218 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="138.723µs"
I0317 10:59:35.549314 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.571425ms"
I0317 10:59:35.553853 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="4.496661ms"
I0317 10:59:35.553967 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="74.549µs"
I0317 10:59:37.144576 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="59.695µs"
I0317 10:59:37.152066 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="62.37µs"
I0317 10:59:37.154638 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.421µs"
I0317 10:59:40.342920 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
I0317 11:07:08.740472 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-507725"
==> kube-proxy [491cadd11c00385df65a401d3e3b0a2095b3ccde1a5a9848bc51b48de255347c] <==
I0317 10:59:35.790303 1 server_linux.go:66] "Using iptables proxy"
I0317 10:59:35.902940 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
E0317 10:59:35.903001 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0317 10:59:35.925931 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0317 10:59:35.925994 1 server_linux.go:170] "Using iptables Proxier"
I0317 10:59:35.927885 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0317 10:59:35.928374 1 server.go:497] "Version info" version="v1.32.2"
I0317 10:59:35.928408 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0317 10:59:35.929769 1 config.go:199] "Starting service config controller"
I0317 10:59:35.929793 1 config.go:105] "Starting endpoint slice config controller"
I0317 10:59:35.929838 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0317 10:59:35.929910 1 config.go:329] "Starting node config controller"
I0317 10:59:35.929923 1 shared_informer.go:313] Waiting for caches to sync for node config
I0317 10:59:35.929985 1 shared_informer.go:313] Waiting for caches to sync for service config
I0317 10:59:36.030460 1 shared_informer.go:320] Caches are synced for node config
I0317 10:59:36.030476 1 shared_informer.go:320] Caches are synced for service config
I0317 10:59:36.030517 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [d003c8e8dced30ed9bc200a654ee25f66265aa58fb62df9d7251b2492899f373] <==
W0317 10:59:28.625567 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0317 10:59:28.625615 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.644350 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0317 10:59:28.644390 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.660824 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0317 10:59:28.660870 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.668130 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0317 10:59:28.668172 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.727543 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0317 10:59:28.727593 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0317 10:59:28.749160 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0317 10:59:28.749207 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.803771 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0317 10:59:28.803822 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.808292 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0317 10:59:28.808330 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.848911 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0317 10:59:28.848981 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.856388 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0317 10:59:28.856431 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.909858 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0317 10:59:28.909917 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:59:28.927455 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0317 10:59:28.927499 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0317 10:59:30.833445 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126486 1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\""
Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126561 1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126583 1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\": failed to find network info for sandbox \"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:06 pause-507725 kubelet[1653]: E0317 11:08:06.126632 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\\\": failed to find network info for sandbox \\\"2cece04d25a506ff93fdb0bff78098cd9b5c3b3815494653e16549b9f79ceff1\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
Mar 17 11:08:08 pause-507725 kubelet[1653]: E0317 11:08:08.107480 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
Mar 17 11:08:19 pause-507725 kubelet[1653]: E0317 11:08:19.107663 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129699 1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\""
Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129772 1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129794 1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\": failed to find network info for sandbox \"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:20 pause-507725 kubelet[1653]: E0317 11:08:20.129837 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\\\": failed to find network info for sandbox \\\"a5f327ecf216a3eb3b48af776e1ca22118e271e2207cd080720fc402766a6c7f\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
Mar 17 11:08:31 pause-507725 kubelet[1653]: E0317 11:08:31.107718 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.126860 1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\""
Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.126948 1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.126976 1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\": failed to find network info for sandbox \"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:33 pause-507725 kubelet[1653]: E0317 11:08:33.127046 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\\\": failed to find network info for sandbox \\\"20894be95d7ab83c1e866a46e8247255c9379c653ced69b9b82f83241591da60\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
Mar 17 11:08:42 pause-507725 kubelet[1653]: E0317 11:08:42.108005 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.124998 1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\""
Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.125086 1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.125120 1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\": failed to find network info for sandbox \"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:46 pause-507725 kubelet[1653]: E0317 11:08:46.125172 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\\\": failed to find network info for sandbox \\\"9def6658e512ac91e511594d5f26465fbe6c137e614b1425b6b3a299fbfd477a\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
Mar 17 11:08:55 pause-507725 kubelet[1653]: E0317 11:08:55.108214 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-dz8rm" podUID="c7a272d4-8d2d-45e7-af98-bfb37db11888"
Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127399 1653 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\""
Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127487 1653 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127521 1653 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\": failed to find network info for sandbox \"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\"" pod="kube-system/coredns-668d6bf9bc-c7scj"
Mar 17 11:08:57 pause-507725 kubelet[1653]: E0317 11:08:57.127586 1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c7scj_kube-system(1f683caa-60d7-44f8-b772-ab187e908994)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\\\": failed to find network info for sandbox \\\"691a0098b203b807285d7b05c186b3d83b02c7ce230429ecd7c45dae277e7439\\\"\"" pod="kube-system/coredns-668d6bf9bc-c7scj" podUID="1f683caa-60d7-44f8-b772-ab187e908994"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-507725 -n pause-507725
helpers_test.go:261: (dbg) Run: kubectl --context pause-507725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-c7scj kindnet-dz8rm
helpers_test.go:274: ======> post-mortem[TestPause/serial/Start]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context pause-507725 describe pod coredns-668d6bf9bc-c7scj kindnet-dz8rm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context pause-507725 describe pod coredns-668d6bf9bc-c7scj kindnet-dz8rm: exit status 1 (59.147675ms)
** stderr **
Error from server (NotFound): pods "coredns-668d6bf9bc-c7scj" not found
Error from server (NotFound): pods "kindnet-dz8rm" not found
** /stderr **
helpers_test.go:279: kubectl --context pause-507725 describe pod coredns-668d6bf9bc-c7scj kindnet-dz8rm: exit status 1
--- FAIL: TestPause/serial/Start (593.07s)
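The failure chain in the log above is: Docker Hub's unauthenticated pull rate limit (the `429 Too Many Requests` on the `kindest/kindnetd` manifest) blocks the CNI image, so the network plugin never comes up, CoreDNS sandbox creation keeps failing with "failed to find network info for sandbox", and the start times out waiting for `kube-dns`. A minimal triage sketch for classifying this pattern when scanning captured kubelet output (the sample line is abbreviated from the log above; the category names are hypothetical, not minikube output):

```shell
#!/bin/sh
# Abbreviated kubelet line from the failure above; in practice this
# would be each line read from the captured log.
sample='ErrImagePull: failed to pull and unpack image "docker.io/kindest/kindnetd:v20250214-acbabc1a": 429 Too Many Requests - toomanyrequests'

# Classify the failure: a rate-limited pull is actionable (authenticate,
# mirror, or pre-load the image) vs. a generic pull error.
case "$sample" in
  *"429 Too Many Requests"*) cause="docker-hub-rate-limit" ;;
  *ErrImagePull*)            cause="image-pull-error" ;;
  *"failed to find network info for sandbox"*) cause="cni-not-ready" ;;
  *)                         cause="unknown" ;;
esac
echo "$cause"
```

With the root cause confirmed as the rate limit, typical mitigations are pre-loading the image into the cluster or pulling with authenticated Docker Hub credentials before the test run, so the CNI pod never has to pull anonymously.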