=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run: docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run: out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr
E0128 18:37:48.070356 10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:37:55.646570 10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/ingress-addon-legacy-067754/client.crt: no such file or directory
E0128 18:38:15.756810 10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/functional-017977/client.crt: no such file or directory
E0128 18:38:32.991092 10353 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/addons-266049/client.crt: no such file or directory
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr: exit status 80 (2m26.010817186s)
-- stdout --
* Starting worker node multinode-052675-m03 in cluster multinode-052675
* Pulling base image ...
* Restarting existing docker container for "multinode-052675-m03" ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
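Note: the stdout above reads like a successful node start, yet the run reports exit status 80 after 2m26s; the actual failure is only visible in the stderr trace below. A minimal way to reproduce just this step by hand, assuming the same built binary and profile as in the log:

    # re-run only the failing step (binary path and profile name taken from the log)
    out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr
    echo "exit status: $?"   # the test saw 80 here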
** stderr **
I0128 18:37:28.768159 140370 out.go:296] Setting OutFile to fd 1 ...
I0128 18:37:28.768346 140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:37:28.768358 140370 out.go:309] Setting ErrFile to fd 2...
I0128 18:37:28.768363 140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:37:28.768525 140370 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
I0128 18:37:28.768811 140370 mustload.go:65] Loading cluster: multinode-052675
I0128 18:37:28.769117 140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:28.769503 140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
W0128 18:37:28.792776 140370 host.go:58] "multinode-052675-m03" host status: Stopped
I0128 18:37:28.795620 140370 out.go:177] * Starting worker node multinode-052675-m03 in cluster multinode-052675
I0128 18:37:28.797097 140370 cache.go:120] Beginning downloading kic base image for docker with docker
I0128 18:37:28.798505 140370 out.go:177] * Pulling base image ...
I0128 18:37:28.799824 140370 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:37:28.799871 140370 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0128 18:37:28.799886 140370 cache.go:57] Caching tarball of preloaded images
I0128 18:37:28.799923 140370 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0128 18:37:28.799986 140370 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0128 18:37:28.799998 140370 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0128 18:37:28.800148 140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:37:28.823561 140370 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0128 18:37:28.823583 140370 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
I0128 18:37:28.823602 140370 cache.go:193] Successfully downloaded all kic artifacts
I0128 18:37:28.823636 140370 start.go:364] acquiring machines lock for multinode-052675-m03: {Name:mk417407859367a958d60a86e439689c454fd088 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0128 18:37:28.823725 140370 start.go:368] acquired machines lock for "multinode-052675-m03" in 40.859µs
I0128 18:37:28.823755 140370 start.go:96] Skipping create...Using existing machine configuration
I0128 18:37:28.823765 140370 fix.go:55] fixHost starting: m03
I0128 18:37:28.823991 140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
I0128 18:37:28.851728 140370 fix.go:103] recreateIfNeeded on multinode-052675-m03: state=Stopped err=<nil>
W0128 18:37:28.851772 140370 fix.go:129] unexpected machine state, will restart: <nil>
I0128 18:37:28.854195 140370 out.go:177] * Restarting existing docker container for "multinode-052675-m03" ...
I0128 18:37:28.855947 140370 cli_runner.go:164] Run: docker start multinode-052675-m03
I0128 18:37:29.217988 140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
I0128 18:37:29.243480 140370 kic.go:426] container "multinode-052675-m03" state is running.
I0128 18:37:29.243903 140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:29.270969 140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:37:29.271203 140370 machine.go:88] provisioning docker machine ...
I0128 18:37:29.271232 140370 ubuntu.go:169] provisioning hostname "multinode-052675-m03"
I0128 18:37:29.271277 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:29.294440 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:29.294650 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:29.294672 140370 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-052675-m03 && echo "multinode-052675-m03" | sudo tee /etc/hostname
I0128 18:37:29.295321 140370 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41948->127.0.0.1:32867: read: connection reset by peer
I0128 18:37:32.436714 140370 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675-m03
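Note: the "connection reset by peer" at 18:37:29 is the SSH dial racing the container's sshd right after "docker start"; libmachine keeps retrying until the hostname command succeeds at 18:37:32. A hypothetical wait loop with the same effect, using the host port 32867 and key path shown in this log:

    # poll until sshd inside the restarted container accepts connections
    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 \
        -i /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa \
        -p 32867 docker@127.0.0.1 true 2>/dev/null; do
      sleep 1
    done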
I0128 18:37:32.436792 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.460540 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:32.460694 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:32.460715 140370 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-052675-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-052675-m03' | sudo tee -a /etc/hosts;
fi
fi
I0128 18:37:32.592073 140370 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0128 18:37:32.592124 140370 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
I0128 18:37:32.592149 140370 ubuntu.go:177] setting up certificates
I0128 18:37:32.592156 140370 provision.go:83] configureAuth start
I0128 18:37:32.592205 140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:32.615257 140370 provision.go:138] copyHostCerts
I0128 18:37:32.615326 140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
I0128 18:37:32.615335 140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
I0128 18:37:32.615398 140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
I0128 18:37:32.615486 140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
I0128 18:37:32.615498 140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
I0128 18:37:32.615524 140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
I0128 18:37:32.615567 140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
I0128 18:37:32.615575 140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
I0128 18:37:32.615594 140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
I0128 18:37:32.615630 140370 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675-m03]
I0128 18:37:32.730355 140370 provision.go:172] copyRemoteCerts
I0128 18:37:32.730428 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0128 18:37:32.730461 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.755868 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:32.848031 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0128 18:37:32.867603 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0128 18:37:32.885889 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0128 18:37:32.904961 140370 provision.go:86] duration metric: configureAuth took 312.790194ms
I0128 18:37:32.904990 140370 ubuntu.go:193] setting minikube options for container-runtime
I0128 18:37:32.905181 140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:32.905241 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.930266 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:32.930415 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:32.930429 140370 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0128 18:37:33.061366 140370 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0128 18:37:33.061402 140370 ubuntu.go:71] root file system type: overlay
I0128 18:37:33.061606 140370 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0128 18:37:33.061688 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.087541 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:33.087719 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:33.087814 140370 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0128 18:37:33.230445 140370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
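Note: the empty "ExecStart=" followed by a full "ExecStart=..." is the standard systemd idiom for replacing, rather than appending to, the command inherited from the base unit, exactly as the comments inside the unit describe. The same pattern in a minimal drop-in (the override path here is illustrative, not taken from this log):

    # hypothetical drop-in showing the clear-then-set ExecStart pattern
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker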
I0128 18:37:33.230514 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.256238 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:33.256411 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:33.256474 140370 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0128 18:37:33.392286 140370 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0128 18:37:33.392317 140370 machine.go:91] provisioned docker machine in 4.121098442s
I0128 18:37:33.392328 140370 start.go:300] post-start starting for "multinode-052675-m03" (driver="docker")
I0128 18:37:33.392335 140370 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0128 18:37:33.392399 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0128 18:37:33.392436 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.418787 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.512281 140370 ssh_runner.go:195] Run: cat /etc/os-release
I0128 18:37:33.514993 140370 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0128 18:37:33.515021 140370 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0128 18:37:33.515039 140370 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0128 18:37:33.515047 140370 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0128 18:37:33.515065 140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
I0128 18:37:33.515125 140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
I0128 18:37:33.515207 140370 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
I0128 18:37:33.515300 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0128 18:37:33.522350 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:37:33.541225 140370 start.go:303] post-start completed in 148.881332ms
I0128 18:37:33.541302 140370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0128 18:37:33.541341 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.567063 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.656837 140370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0128 18:37:33.660572 140370 fix.go:57] fixHost completed within 4.836799887s
I0128 18:37:33.660596 140370 start.go:83] releasing machines lock for "multinode-052675-m03", held for 4.836857796s
I0128 18:37:33.660659 140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:33.683972 140370 ssh_runner.go:195] Run: systemctl --version
I0128 18:37:33.684004 140370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0128 18:37:33.684023 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.684051 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.710480 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.711899 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.800897 140370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0128 18:37:33.836566 140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0128 18:37:33.853295 140370 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0128 18:37:33.853399 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0128 18:37:33.860387 140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0128 18:37:33.874296 140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0128 18:37:33.881997 140370 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0128 18:37:33.882027 140370 start.go:483] detecting cgroup driver to use...
I0128 18:37:33.882056 140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:37:33.882204 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:37:33.895781 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0128 18:37:33.904320 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0128 18:37:33.912939 140370 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0128 18:37:33.912987 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0128 18:37:33.922779 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:37:33.930843 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0128 18:37:33.938894 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:37:33.947415 140370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0128 18:37:33.955190 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0128 18:37:33.963495 140370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0128 18:37:33.969954 140370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0128 18:37:33.976395 140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:34.066470 140370 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0128 18:37:34.145520 140370 start.go:483] detecting cgroup driver to use...
I0128 18:37:34.145571 140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:37:34.145629 140370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0128 18:37:34.155619 140370 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0128 18:37:34.155677 140370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0128 18:37:34.164697 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:37:34.179339 140370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0128 18:37:34.287521 140370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0128 18:37:34.395438 140370 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0128 18:37:34.395467 140370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0128 18:37:34.408864 140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:34.487254 140370 ssh_runner.go:195] Run: sudo systemctl restart docker
I0128 18:37:34.716580 140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:37:34.796937 140370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0128 18:37:34.876051 140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:37:34.951893 140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:35.035868 140370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0128 18:37:35.052109 140370 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0128 18:37:35.052172 140370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0128 18:37:35.055415 140370 start.go:551] Will wait 60s for crictl version
I0128 18:37:35.055467 140370 ssh_runner.go:195] Run: which crictl
I0128 18:37:35.058181 140370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0128 18:37:35.135807 140370 start.go:567] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0128 18:37:35.135864 140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:37:35.161958 140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:37:35.193909 140370 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0128 18:37:35.194009 140370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0128 18:37:35.291519 140370 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-28 18:37:35.214249226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0128 18:37:35.291637 140370 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0128 18:37:35.313607 140370 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0128 18:37:35.317298 140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 18:37:35.326968 140370 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.4
I0128 18:37:35.327018 140370 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:37:35.327144 140370 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
I0128 18:37:35.327197 140370 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
I0128 18:37:35.327263 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
W0128 18:37:35.327288 140370 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
I0128 18:37:35.327300 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
I0128 18:37:35.327326 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
I0128 18:37:35.327349 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
I0128 18:37:35.327368 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
I0128 18:37:35.327402 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:37:35.327967 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0128 18:37:35.345516 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0128 18:37:35.363369 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0128 18:37:35.380552 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0128 18:37:35.397674 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
I0128 18:37:35.416171 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
I0128 18:37:35.435443 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0128 18:37:35.452809 140370 ssh_runner.go:195] Run: openssl version
I0128 18:37:35.457757 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
I0128 18:37:35.465226 140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
I0128 18:37:35.468203 140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
I0128 18:37:35.468250 140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
I0128 18:37:35.472911 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
I0128 18:37:35.479788 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
I0128 18:37:35.487495 140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
I0128 18:37:35.491144 140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
I0128 18:37:35.491199 140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
I0128 18:37:35.496365 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
I0128 18:37:35.503586 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0128 18:37:35.511636 140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.515214 140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.515271 140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.520350 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0128 18:37:35.527590 140370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0128 18:37:35.595260 140370 cni.go:84] Creating CNI manager for ""
I0128 18:37:35.595281 140370 cni.go:136] 3 nodes found, recommending kindnet
I0128 18:37:35.595290 140370 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0128 18:37:35.595309 140370 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0128 18:37:35.595443 140370 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.4
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-052675-m03"
kubeletExtraArgs:
node-ip: 192.168.58.4
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
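Note: this is the full kubeadm config minikube generated for the joining worker. The preflight hint repeated in the join errors further down applies here: the authoritative copy lives in the cluster and can be compared against this one.

    # view the kubeadm config stored in the cluster (command quoted from the
    # preflight output later in this log)
    kubectl -n kube-system get cm kubeadm-config -o yaml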
I0128 18:37:35.595530 140370 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0128 18:37:35.595574 140370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0128 18:37:35.604342 140370 binaries.go:44] Found k8s binaries, skipping transfer
I0128 18:37:35.604401 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0128 18:37:35.610782 140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0128 18:37:35.623062 140370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0128 18:37:35.635745 140370 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0128 18:37:35.638800 140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 18:37:35.647978 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.648027 140370 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0128 18:37:35.648143 140370 addons.go:65] Setting storage-provisioner=true in profile "multinode-052675"
I0128 18:37:35.648165 140370 addons.go:227] Setting addon storage-provisioner=true in "multinode-052675"
I0128 18:37:35.648166 140370 addons.go:65] Setting default-storageclass=true in profile "multinode-052675"
I0128 18:37:35.648194 140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:35.648219 140370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-052675"
W0128 18:37:35.648174 140370 addons.go:236] addon storage-provisioner should already be in state true
I0128 18:37:35.648229 140370 start.go:299] JoinCluster: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 18:37:35.648342 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.648354 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0128 18:37:35.648403 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.648554 140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.648785 140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.678753 140370 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0128 18:37:35.677915 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.680756 140370 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:37:35.680780 140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0128 18:37:35.680841 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.695309 140370 addons.go:227] Setting addon default-storageclass=true in "multinode-052675"
W0128 18:37:35.695331 140370 addons.go:236] addon default-storageclass should already be in state true
I0128 18:37:35.695353 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.695742 140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.710623 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.723088 140370 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0128 18:37:35.723114 140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0128 18:37:35.723171 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.749745 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.818947 140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:37:35.833740 140370 start.go:312] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:35.833792 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.834086 140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-052675-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0128 18:37:35.834128 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.855911 140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0128 18:37:35.866405 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:36.216130 140370 node.go:109] successfully drained node "m03"
I0128 18:37:36.218504 140370 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0128 18:37:36.220255 140370 addons.go:492] enable addons completed in 572.236601ms: enabled=[storage-provisioner default-storageclass]
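Note: addon enabling runs concurrently with the node join, which is why "Enabled addons" appears in stdout even though the join below never succeeds. The addon state can be checked independently of node health, assuming the same binary and profile as above:

    # list addon status for the profile (independent of the m03 join outcome)
    out/minikube-linux-amd64 -p multinode-052675 addons list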
I0128 18:37:36.220401 140370 node.go:125] successfully deleted node "m03"
I0128 18:37:36.220416 140370 start.go:316] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:36.220437 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:36.220487 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:37:36.384147 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:36.255380 1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
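Note: the decisive line is the "a Node with name ... and status Ready already exists" error. Although the log reports deleting node m03 at 18:37:36.220, the kubelet inside the restarted container was plausibly already running and re-registered the Node between the delete and the join. A hypothetical manual recovery, built only from commands this log already uses:

    # on the control plane: remove the stale Node object
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.26.1/kubectl delete node multinode-052675-m03
    # on m03: reset with an explicit CRI endpoint, sidestepping the
    # "multiple CRI endpoints" failure shown in the reset attempt below
    sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" \
      kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock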
I0128 18:37:36.384173 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:37:36.384189 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:37:36.422394 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
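Note: the reset fails because the kic node genuinely exposes two CRI sockets: this same log configures both containerd (crictl.yaml write at 18:37:33.882) and cri-dockerd (overwrite at 18:37:34.164), so kubeadm's socket autodetection refuses to pick one. Verifying from the host:

    # both sockets named in the error are present (paths taken from the error above)
    docker exec multinode-052675-m03 ls -l \
      /var/run/containerd/containerd.sock /var/run/cri-dockerd.sock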
I0128 18:37:36.422421 140370 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:36.255380 1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.470499 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:47.470589 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:37:47.620550 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:47.507087 1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.620577 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:37:47.620591 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:37:47.656587 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.656613 140370 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:47.507087 1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.265264 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:38:09.265318 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:38:09.421323 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:09.302407 2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.421352 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:38:09.421365 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:38:09.458262 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.458304 140370 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:09.302407 2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.661652 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:38:35.661716 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:38:35.817509 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:35.697493 2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.817536 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:38:35.817547 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:38:35.855576 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.855612 140370 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:35.697493 2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.504180 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:39:07.504247 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:39:07.655353 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:07.539815 2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.655375 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:39:07.655389 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:39:07.694454 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.694486 140370 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:07.539815 2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.504816 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:39:54.504888 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:39:54.657786 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:54.540796 3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.657811 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:39:54.657827 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:39:54.694526 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.694560 140370 start.go:301] JoinCluster complete in 2m19.046332183s
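Taken together, the 2m19s above is a single loop: try kubeadm join, fail on the stale Node object, attempt a kubeadm reset that itself fails on the ambiguous CRI socket, sleep a growing backoff (11s, 21s, 26s, 31s and 46s in this run), and repeat until the budget is spent. A compact sketch of that control flow follows, with joinNode and resetNode as stand-ins for the ssh_runner calls logged above.

package main

import (
	"errors"
	"fmt"
	"time"
)

// rejoinWithBackoff mirrors the cycle recorded above: join, tolerate a
// failed reset (start.go:327 "kubeadm reset failed, continuing anyway"),
// back off, retry. The delays are the ones retry.go logged in this run.
func rejoinWithBackoff(joinNode, resetNode func() error) error {
	backoffs := []time.Duration{
		11 * time.Second, 21 * time.Second, 26 * time.Second,
		31 * time.Second, 46 * time.Second,
	}
	var lastErr error
	for _, d := range backoffs {
		if lastErr = joinNode(); lastErr == nil {
			return nil
		}
		_ = resetNode() // failure logged but ignored, as in the trace
		time.Sleep(d)
	}
	return fmt.Errorf("worker failed to join cluster: %w", lastErr)
}

func main() {
	// Stub callbacks that reproduce the two errors seen in this log.
	err := rejoinWithBackoff(
		func() error { return errors.New(`a Node with name "multinode-052675-m03" already exists`) },
		func() error { return errors.New("found multiple CRI endpoints") },
	)
	fmt.Println(err)
}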
I0128 18:39:54.697658 140370 out.go:177]
W0128 18:39:54.699334 140370 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:54.540796 3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
W0128 18:39:54.699351 140370 out.go:239] *
W0128 18:39:54.701288 140370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0128 18:39:54.703217 140370 out.go:177]
** /stderr **
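Every attempt dies on the same kubelet-start check: the control plane still holds a Ready Node object named multinode-052675-m03 from before the stop, and kubeadm join refuses to register a new node under an existing name. As the error text itself suggests, deleting the stale Node and starting the worker again is a plausible manual recovery; a sketch, assuming the kubeconfig context matches the profile name:

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing its output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	log.Printf("%s %v:\n%s", name, args, out)
}

func main() {
	// Free the node name that the kubelet-start phase refuses to overwrite.
	run("kubectl", "--context", "multinode-052675",
		"delete", "node", "multinode-052675-m03")
	// Retry the worker start now that the stale registration is gone.
	run("out/minikube-linux-amd64", "-p", "multinode-052675",
		"node", "start", "m03", "--alsologtostderr")
}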
multinode_test.go:254: I0128 18:37:28.768159 140370 out.go:296] Setting OutFile to fd 1 ...
I0128 18:37:28.768346 140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:37:28.768358 140370 out.go:309] Setting ErrFile to fd 2...
I0128 18:37:28.768363 140370 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:37:28.768525 140370 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
I0128 18:37:28.768811 140370 mustload.go:65] Loading cluster: multinode-052675
I0128 18:37:28.769117 140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:28.769503 140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
W0128 18:37:28.792776 140370 host.go:58] "multinode-052675-m03" host status: Stopped
I0128 18:37:28.795620 140370 out.go:177] * Starting worker node multinode-052675-m03 in cluster multinode-052675
I0128 18:37:28.797097 140370 cache.go:120] Beginning downloading kic base image for docker with docker
I0128 18:37:28.798505 140370 out.go:177] * Pulling base image ...
I0128 18:37:28.799824 140370 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:37:28.799871 140370 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0128 18:37:28.799886 140370 cache.go:57] Caching tarball of preloaded images
I0128 18:37:28.799923 140370 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0128 18:37:28.799986 140370 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0128 18:37:28.799998 140370 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0128 18:37:28.800148 140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:37:28.823561 140370 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0128 18:37:28.823583 140370 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
I0128 18:37:28.823602 140370 cache.go:193] Successfully downloaded all kic artifacts
I0128 18:37:28.823636 140370 start.go:364] acquiring machines lock for multinode-052675-m03: {Name:mk417407859367a958d60a86e439689c454fd088 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0128 18:37:28.823725 140370 start.go:368] acquired machines lock for "multinode-052675-m03" in 40.859µs
I0128 18:37:28.823755 140370 start.go:96] Skipping create...Using existing machine configuration
I0128 18:37:28.823765 140370 fix.go:55] fixHost starting: m03
I0128 18:37:28.823991 140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
I0128 18:37:28.851728 140370 fix.go:103] recreateIfNeeded on multinode-052675-m03: state=Stopped err=<nil>
W0128 18:37:28.851772 140370 fix.go:129] unexpected machine state, will restart: <nil>
I0128 18:37:28.854195 140370 out.go:177] * Restarting existing docker container for "multinode-052675-m03" ...
I0128 18:37:28.855947 140370 cli_runner.go:164] Run: docker start multinode-052675-m03
I0128 18:37:29.217988 140370 cli_runner.go:164] Run: docker container inspect multinode-052675-m03 --format={{.State.Status}}
I0128 18:37:29.243480 140370 kic.go:426] container "multinode-052675-m03" state is running.
I0128 18:37:29.243903 140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:29.270969 140370 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:37:29.271203 140370 machine.go:88] provisioning docker machine ...
I0128 18:37:29.271232 140370 ubuntu.go:169] provisioning hostname "multinode-052675-m03"
I0128 18:37:29.271277 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:29.294440 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:29.294650 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:29.294672 140370 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-052675-m03 && echo "multinode-052675-m03" | sudo tee /etc/hostname
I0128 18:37:29.295321 140370 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41948->127.0.0.1:32867: read: connection reset by peer
I0128 18:37:32.436714 140370 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675-m03
I0128 18:37:32.436792 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.460540 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:32.460694 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:32.460715 140370 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-052675-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-052675-m03' | sudo tee -a /etc/hosts;
fi
fi
I0128 18:37:32.592073 140370 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0128 18:37:32.592124 140370 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
I0128 18:37:32.592149 140370 ubuntu.go:177] setting up certificates
I0128 18:37:32.592156 140370 provision.go:83] configureAuth start
I0128 18:37:32.592205 140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:32.615257 140370 provision.go:138] copyHostCerts
I0128 18:37:32.615326 140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
I0128 18:37:32.615335 140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
I0128 18:37:32.615398 140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
I0128 18:37:32.615486 140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
I0128 18:37:32.615498 140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
I0128 18:37:32.615524 140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
I0128 18:37:32.615567 140370 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
I0128 18:37:32.615575 140370 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
I0128 18:37:32.615594 140370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
I0128 18:37:32.615630 140370 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675-m03]
I0128 18:37:32.730355 140370 provision.go:172] copyRemoteCerts
I0128 18:37:32.730428 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0128 18:37:32.730461 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.755868 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:32.848031 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0128 18:37:32.867603 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0128 18:37:32.885889 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0128 18:37:32.904961 140370 provision.go:86] duration metric: configureAuth took 312.790194ms
I0128 18:37:32.904990 140370 ubuntu.go:193] setting minikube options for container-runtime
I0128 18:37:32.905181 140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:32.905241 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:32.930266 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:32.930415 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:32.930429 140370 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0128 18:37:33.061366 140370 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0128 18:37:33.061402 140370 ubuntu.go:71] root file system type: overlay
I0128 18:37:33.061606 140370 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0128 18:37:33.061688 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.087541 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:33.087719 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:33.087814 140370 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0128 18:37:33.230445 140370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0128 18:37:33.230514 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.256238 140370 main.go:141] libmachine: Using SSH client type: native
I0128 18:37:33.256411 140370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0128 18:37:33.256474 140370 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0128 18:37:33.392286 140370 main.go:141] libmachine: SSH cmd err, output: <nil>:
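The guard in that last SSH command is what makes re-provisioning cheap: diff -u exits zero when the freshly rendered unit matches what is already on disk, so the mv, daemon-reload, enable and restart branch fires only when the unit actually changed. The same pattern as a small helper, with runSSH standing in for minikube's ssh_runner (hypothetical signature):

// installUnitIfChanged replaces docker.service and restarts docker only
// when the newly rendered unit differs from the one on disk; "diff -u"
// exiting non-zero is what triggers the || branch.
func installUnitIfChanged(runSSH func(cmd string) error) error {
	const unit = "/lib/systemd/system/docker.service"
	return runSSH("sudo diff -u " + unit + " " + unit + ".new" +
		" || { sudo mv " + unit + ".new " + unit +
		"; sudo systemctl -f daemon-reload" +
		" && sudo systemctl -f enable docker" +
		" && sudo systemctl -f restart docker; }")
}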
I0128 18:37:33.392317 140370 machine.go:91] provisioned docker machine in 4.121098442s
I0128 18:37:33.392328 140370 start.go:300] post-start starting for "multinode-052675-m03" (driver="docker")
I0128 18:37:33.392335 140370 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0128 18:37:33.392399 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0128 18:37:33.392436 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.418787 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.512281 140370 ssh_runner.go:195] Run: cat /etc/os-release
I0128 18:37:33.514993 140370 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0128 18:37:33.515021 140370 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0128 18:37:33.515039 140370 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0128 18:37:33.515047 140370 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0128 18:37:33.515065 140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
I0128 18:37:33.515125 140370 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
I0128 18:37:33.515207 140370 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
I0128 18:37:33.515300 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0128 18:37:33.522350 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:37:33.541225 140370 start.go:303] post-start completed in 148.881332ms
I0128 18:37:33.541302 140370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0128 18:37:33.541341 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.567063 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.656837 140370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0128 18:37:33.660572 140370 fix.go:57] fixHost completed within 4.836799887s
I0128 18:37:33.660596 140370 start.go:83] releasing machines lock for "multinode-052675-m03", held for 4.836857796s
I0128 18:37:33.660659 140370 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m03
I0128 18:37:33.683972 140370 ssh_runner.go:195] Run: systemctl --version
I0128 18:37:33.684004 140370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0128 18:37:33.684023 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.684051 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m03
I0128 18:37:33.710480 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.711899 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m03/id_rsa Username:docker}
I0128 18:37:33.800897 140370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0128 18:37:33.836566 140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0128 18:37:33.853295 140370 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0128 18:37:33.853399 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0128 18:37:33.860387 140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0128 18:37:33.874296 140370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0128 18:37:33.881997 140370 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0128 18:37:33.882027 140370 start.go:483] detecting cgroup driver to use...
I0128 18:37:33.882056 140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:37:33.882204 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:37:33.895781 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0128 18:37:33.904320 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0128 18:37:33.912939 140370 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0128 18:37:33.912987 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0128 18:37:33.922779 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:37:33.930843 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0128 18:37:33.938894 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:37:33.947415 140370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0128 18:37:33.955190 140370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0128 18:37:33.963495 140370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0128 18:37:33.969954 140370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0128 18:37:33.976395 140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:34.066470 140370 ssh_runner.go:195] Run: sudo systemctl restart containerd
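The run of sed edits above rewrites /etc/containerd/config.toml before the containerd restart. A sketch of the touched keys, reconstructed from the commands alone rather than dumped from the real file:
    # sketch: the settings the sed edits above leave in /etc/containerd/config.toml
    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected (reconstructed):
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false          # cgroupfs, matching the host driver detected above
    #   conf_dir = "/etc/cni/net.d"
    # plus io.containerd.runtime.v1.linux / io.containerd.runc.v1 rewritten to io.containerd.runc.v2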
I0128 18:37:34.145520 140370 start.go:483] detecting cgroup driver to use...
I0128 18:37:34.145571 140370 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:37:34.145629 140370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0128 18:37:34.155619 140370 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0128 18:37:34.155677 140370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0128 18:37:34.164697 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:37:34.179339 140370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0128 18:37:34.287521 140370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0128 18:37:34.395438 140370 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0128 18:37:34.395467 140370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0128 18:37:34.408864 140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:34.487254 140370 ssh_runner.go:195] Run: sudo systemctl restart docker
I0128 18:37:34.716580 140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:37:34.796937 140370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0128 18:37:34.876051 140370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:37:34.951893 140370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:37:35.035868 140370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0128 18:37:35.052109 140370 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0128 18:37:35.052172 140370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0128 18:37:35.055415 140370 start.go:551] Will wait 60s for crictl version
I0128 18:37:35.055467 140370 ssh_runner.go:195] Run: which crictl
I0128 18:37:35.058181 140370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0128 18:37:35.135807 140370 start.go:567] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
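The version probe above can be reproduced by hand; crictl reads its endpoint from the /etc/crictl.yaml written at 18:37:34.164697, so no flags are needed (sketch):
    sudo crictl version   # expect RuntimeName: docker, RuntimeApiVersion: v1alpha2
    sudo crictl info      # runtime conditions; useful if the 60s socket wait above ever times out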
I0128 18:37:35.135864 140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:37:35.161958 140370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:37:35.193909 140370 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0128 18:37:35.194009 140370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0128 18:37:35.291519 140370 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-28 18:37:35.214249226 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0128 18:37:35.291637 140370 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0128 18:37:35.313607 140370 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0128 18:37:35.317298 140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 18:37:35.326968 140370 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.4
I0128 18:37:35.327018 140370 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:37:35.327144 140370 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
I0128 18:37:35.327197 140370 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
I0128 18:37:35.327263 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
W0128 18:37:35.327288 140370 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
I0128 18:37:35.327300 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
I0128 18:37:35.327326 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
I0128 18:37:35.327349 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
I0128 18:37:35.327368 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
I0128 18:37:35.327402 140370 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:37:35.327967 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0128 18:37:35.345516 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0128 18:37:35.363369 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0128 18:37:35.380552 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0128 18:37:35.397674 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
I0128 18:37:35.416171 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
I0128 18:37:35.435443 140370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0128 18:37:35.452809 140370 ssh_runner.go:195] Run: openssl version
I0128 18:37:35.457757 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
I0128 18:37:35.465226 140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
I0128 18:37:35.468203 140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
I0128 18:37:35.468250 140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
I0128 18:37:35.472911 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
I0128 18:37:35.479788 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
I0128 18:37:35.487495 140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
I0128 18:37:35.491144 140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
I0128 18:37:35.491199 140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
I0128 18:37:35.496365 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
I0128 18:37:35.503586 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0128 18:37:35.511636 140370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.515214 140370 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.515271 140370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0128 18:37:35.520350 140370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
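The <hash>.0 symlink names above follow OpenSSL's hashed-directory convention: the link is named after the subject hash printed by openssl x509 -hash. The minikubeCA steps above, replayed by hand (sketch):
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0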
I0128 18:37:35.527590 140370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0128 18:37:35.595260 140370 cni.go:84] Creating CNI manager for ""
I0128 18:37:35.595281 140370 cni.go:136] 3 nodes found, recommending kindnet
I0128 18:37:35.595290 140370 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0128 18:37:35.595309 140370 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0128 18:37:35.595443 140370 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.4
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-052675-m03"
kubeletExtraArgs:
node-ip: 192.168.58.4
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
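minikube renders the kubeadm config above on the joining node; the authoritative in-cluster copy can be compared against it with the command the preflight output below also suggests:
    kubectl -n kube-system get cm kubeadm-config -o yaml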
I0128 18:37:35.595530 140370 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0128 18:37:35.595574 140370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0128 18:37:35.604342 140370 binaries.go:44] Found k8s binaries, skipping transfer
I0128 18:37:35.604401 140370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0128 18:37:35.610782 140370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0128 18:37:35.623062 140370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
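After the two scp steps above, the kubelet unit plus its kubeadm drop-in can be reviewed together, using the same systemctl idiom the log already applies to docker.service (sketch):
    sudo systemctl cat kubelet   # /lib/systemd/system/kubelet.service + 10-kubeadm.conf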
I0128 18:37:35.635745 140370 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0128 18:37:35.638800 140370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
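The one-liner above is minikube's idempotent /etc/hosts update: drop any line ending in the tab-separated hostname, append the fresh mapping, then copy the temp file back under sudo. A generalized sketch of the same idiom (the function name is illustrative):
    update_hosts_entry() {   # usage: update_hosts_entry 192.168.58.2 control-plane.minikube.internal
      { grep -v $'\t'"$2"'$' /etc/hosts; echo "$1 $2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
Writing to a temp file first and copying it back avoids truncating /etc/hosts while it is still being read by the grep.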
I0128 18:37:35.647978 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.648027 140370 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0128 18:37:35.648143 140370 addons.go:65] Setting storage-provisioner=true in profile "multinode-052675"
I0128 18:37:35.648165 140370 addons.go:227] Setting addon storage-provisioner=true in "multinode-052675"
I0128 18:37:35.648166 140370 addons.go:65] Setting default-storageclass=true in profile "multinode-052675"
I0128 18:37:35.648194 140370 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:37:35.648219 140370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-052675"
W0128 18:37:35.648174 140370 addons.go:236] addon storage-provisioner should already be in state true
I0128 18:37:35.648229 140370 start.go:299] JoinCluster: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 18:37:35.648342 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.648354 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0128 18:37:35.648403 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.648554 140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.648785 140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.678753 140370 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0128 18:37:35.677915 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.680756 140370 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:37:35.680780 140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0128 18:37:35.680841 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.695309 140370 addons.go:227] Setting addon default-storageclass=true in "multinode-052675"
W0128 18:37:35.695331 140370 addons.go:236] addon default-storageclass should already be in state true
I0128 18:37:35.695353 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.695742 140370 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:37:35.710623 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.723088 140370 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0128 18:37:35.723114 140370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0128 18:37:35.723171 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.749745 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:35.818947 140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:37:35.833740 140370 start.go:312] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:35.833792 140370 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:37:35.834086 140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-052675-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0128 18:37:35.834128 140370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:37:35.855911 140370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0128 18:37:35.866405 140370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:37:36.216130 140370 node.go:109] successfully drained node "m03"
I0128 18:37:36.218504 140370 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0128 18:37:36.220255 140370 addons.go:492] enable addons completed in 572.236601ms: enabled=[storage-provisioner default-storageclass]
I0128 18:37:36.220401 140370 node.go:125] successfully deleted node "m03"
I0128 18:37:36.220416 140370 start.go:316] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:36.220437 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:36.220487 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:37:36.384147 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:36.255380 1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
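This first failure is the crux of the whole exit status 80: the kubeadm join is rejected because a Ready Node object named "multinode-052675-m03" still exists, even though the log above reports draining and deleting a node it calls "m03". One plausible reading is that the delete step removed (or no-opped on) a differently named object, leaving the real Node behind, so every retry below fails identically. The manual remediation the error text itself asks for would be (sketch, assuming kubectl points at the multinode-052675 control plane):
    kubectl get nodes                          # confirm the stale Ready entry
    kubectl delete node multinode-052675-m03   # remove it before re-running the join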
I0128 18:37:36.384173 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:37:36.384189 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:37:36.422394 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
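The reset fails for a separate reason: both the containerd and cri-dockerd sockets are live in the kicbase container, and kubeadm refuses to guess between them. kubeadm reset accepts an explicit endpoint, so the manual equivalent would be (sketch):
    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
Because the reset "failed, continuing anyway", the stale kubelet state persists across every retry below as well.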
I0128 18:37:36.422421 140370 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:36.255380 1428 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.470499 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:37:47.470589 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:37:47.620550 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:47.507087 1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.620577 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:37:47.620591 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:37:47.656587 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:37:47.656613 140370 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:37:47.507087 1660 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.265264 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:38:09.265318 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:38:09.421323 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:09.302407 2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.421352 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:38:09.421365 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:38:09.458262 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:09.458304 140370 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:09.302407 2144 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.661652 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:38:35.661716 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:38:35.817509 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:35.697493 2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.817536 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:38:35.817547 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:38:35.855576 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:38:35.855612 140370 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:38:35.697493 2440 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.504180 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:39:07.504247 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:39:07.655353 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:07.539815 2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.655375 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:39:07.655389 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:39:07.694454 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:07.694486 140370 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:07.539815 2746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.504816 140370 start.go:320] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0128 18:39:54.504888 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03"
E0128 18:39:54.657786 140370 start.go:322] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:54.540796 3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0128 18:39:54.657811 140370 start.go:325] resetting worker node "m03" before attempting to rejoin cluster...
I0128 18:39:54.657827 140370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0128 18:39:54.694526 140370 start.go:327] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
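Note: the reset fails because the kicbase image exposes two CRI sockets (containerd and cri-dockerd) and, unlike the join command above, this reset is invoked without --cri-socket, so kubeadm cannot pick an endpoint on its own. kubeadm reset accepts the same flag, so a disambiguated sketch of the call, if one were running it by hand on the node, would be:

    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock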
I0128 18:39:54.694560 140370 start.go:301] JoinCluster complete in 2m19.046332183s
I0128 18:39:54.697658 140370 out.go:177]
W0128 18:39:54.699334 140370 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv2kvw.w468jr8o9qicj0iv --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0128 18:39:54.540796 3161 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-052675-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
W0128 18:39:54.699351 140370 out.go:239] *
W0128 18:39:54.701288 140370 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0128 18:39:54.703217 140370 out.go:177]
multinode_test.go:255: node start returned an error. args "out/minikube-linux-amd64 -p multinode-052675 node start m03 --alsologtostderr": exit status 80
multinode_test.go:259: (dbg) Run: out/minikube-linux-amd64 -p multinode-052675 status
multinode_test.go:273: (dbg) Run: kubectl get nodes
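Note: if the stale-Node theory holds, the kubectl get nodes call above should still list multinode-052675-m03. A more targeted check against the same kubeconfig would be:

    kubectl get node multinode-052675-m03 -o wide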
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect multinode-052675
helpers_test.go:235: (dbg) docker inspect multinode-052675:
-- stdout --
[
{
"Id": "314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6",
"Created": "2023-01-28T18:35:44.778914376Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 122257,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-01-28T18:35:45.14907195Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
"ResolvConfPath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/hostname",
"HostsPath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/hosts",
"LogPath": "/var/lib/docker/containers/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6/314f7839c3cec39a48ea707252ded475868deab2bbff865b2a2ec7a183d109c6-json.log",
"Name": "/multinode-052675",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"multinode-052675:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {
"max-size": "100m"
}
},
"NetworkMode": "multinode-052675",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5-init/diff:/var/lib/docker/overlay2/db391ee9d0a42f7dc5df56df5db62b059d8e193980adf15a88c06e73cfc1e11a/diff:/var/lib/docker/overlay2/e6a847e0ebf9467b2ce5842728c2091e03878a25d813278a725211251a8a0eae/diff:/var/lib/docker/overlay2/32b8245ada3251dc013f140506c5240693363e8c2c9707bb1f2bd97a299c1c9c/diff:/var/lib/docker/overlay2/b82b7f6425d78cea023899c86c4008c827442cea441cb667b37154bbc2d24d2a/diff:/var/lib/docker/overlay2/c46a484250fda920ad973923a47eec6875fb83c5c8ffe4014447a7388adfa158/diff:/var/lib/docker/overlay2/4fd484a57f89f1beb796ce3c7e4df2d30f538b8058da22375106e2a23238713b/diff:/var/lib/docker/overlay2/c69e17070e6c00742f533cdd19089ef2f300b9182f899365e138db9a76b96add/diff:/var/lib/docker/overlay2/a89cd341d5705704d306d02fd86e7ff2f35e0d9ed2e500ac4c92f559d7f9508c/diff:/var/lib/docker/overlay2/460f41c732ad36df327a55d31cece26dad7009e8668de7190d704b3b155d9da4/diff:/var/lib/docker/overlay2/d4f3b8
89378af2d93d8e76850ebeadbcf0c8e9306d6547fb27c0ebb4fed72f10/diff:/var/lib/docker/overlay2/ca8448ea6755a2c2089fa9b41e21d9b4e343d18e866ffdf4e6860c5f5a589253/diff:/var/lib/docker/overlay2/c24c620026d1ca52eb96ff56568a2dd6bc302ff4afa648f8aef8f10ed2ece07b/diff:/var/lib/docker/overlay2/8ac88d56c0f846c2cf3cac5a490d2fb5e20b27161cfd03efcef725215ae3b441/diff:/var/lib/docker/overlay2/0c1b370b7889964a315e82599d29af07536dc272e6858778fd37b33783ba23e8/diff:/var/lib/docker/overlay2/a67314cc1f9da41da9764c7e038fc2cf0f488077a02f985c55f3a98eedd674e0/diff:/var/lib/docker/overlay2/076f5646fa2e7d1a370061474f260221170e0360913a62e429e22b2064e424da/diff:/var/lib/docker/overlay2/47411db3bf4ad8949b8540ea70897d62aa890be3526965fea1dc8c204272c55f/diff:/var/lib/docker/overlay2/8e1e48bf4dc814cd33ebbc6c4a395f3a538f828c7fb0a89e284518636cba1eeb/diff:/var/lib/docker/overlay2/595065ee241a515f497552c7317fadeffa0059d878cbca549866fd353e111815/diff:/var/lib/docker/overlay2/67d36d8ba6c4af51e5fd4c0c2158a8b0a27ce4d12690a8c640df17a49c7d9943/diff:/var/lib/d
ocker/overlay2/d65e9183bc7192d5f984a03a3305bde47985b124f97845ca8aa69614b496f11e/diff:/var/lib/docker/overlay2/f077ef7e752361f549e2bcff923cd9656d9422659019f479d6f31e6aaf138f2d/diff:/var/lib/docker/overlay2/2c86b185414bf11369f21dc9b85f647448d3cb231a389150d886c71a0ca4b421/diff:/var/lib/docker/overlay2/a33763e169f5c1e558d5c22af762002faee9030c7345e94863fedad26dec97d9/diff:/var/lib/docker/overlay2/46f61207484849cc704271281accc52f51d5b60521321d23f35f81f9bb0e4a77/diff:/var/lib/docker/overlay2/95df6666d99483dc3a2800026c52e4748fefdbc9e2546bfd46466751d0d731a9/diff:/var/lib/docker/overlay2/a456a63f8e47b35152666b5bed78a907758cd348f3f920ffbb0d9311c9d279f9/diff:/var/lib/docker/overlay2/1c5e94ffa671b54b267cd527227dcfc39ed5bbab8e0fb6be2070ec431d507a0a/diff:/var/lib/docker/overlay2/8a3bd5d98c7659cf304678b6264744ec246cef9aee117fa1a540ff86a482ccc9/diff:/var/lib/docker/overlay2/9cad4076d4b4bbcef9e82592a57b400fe80d42ff1a19694877817439314cee0a/diff:/var/lib/docker/overlay2/7b472338287e29db62b353700eac813b73c885f86407cd11c41a1934299
e0863/diff:/var/lib/docker/overlay2/7354f50bc82cc9855195da76830d2458639d9e6287091849761c899619a2ac04/diff:/var/lib/docker/overlay2/8ab525fe3dfca3bc1d9268c9a3f943b207b867d96340df596abb469af4828ba6/diff:/var/lib/docker/overlay2/dffeea500d781c9d4c5cc65f1e1b6700cdb3a811012a3badaa2115639ffc0caf/diff:/var/lib/docker/overlay2/61a63133b63995518dd6210c5e78426401d4fc9f7d185b0aa89bbda3fc8c25b4/diff:/var/lib/docker/overlay2/e9e4eb2fce220904fdd41e59a5fa8987119588334028be497f829eef4be81f1c/diff:/var/lib/docker/overlay2/07a1057c0f65b9e87f72fa58023fbf90660450984d4fbc6f060ec063e9b08d45/diff:/var/lib/docker/overlay2/f2287ff314d014b75d8b3eb8116909dbed8fc8245f5279470da1b4ae6794135c/diff:/var/lib/docker/overlay2/b32153240a2094363680e20f20963644e66c17ce8ba073e6c2792e4b8a0b94e6/diff:/var/lib/docker/overlay2/bfaa3114ab06fc41c74eee314d6113b0126c1a54deea72eaeb994032c71a489a/diff:/var/lib/docker/overlay2/214d0a46ee53e937a5e0573eb214868d10db3a2af1260784531edbd04adcd3b9/diff:/var/lib/docker/overlay2/508066538d9756b99d4582d0658234a93882f9
326f08176861a8667ec795f2c2/diff:/var/lib/docker/overlay2/58e67638a3291768e9dbb2be901c6b5639959c7cc86f4e4bab8f2e639b50661c/diff:/var/lib/docker/overlay2/a4f5240c2f2f160632514b931abac3aed3b9488f5bc07990127c7e5c3e2fd9ab/diff",
"MergedDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5/merged",
"UpperDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5/diff",
"WorkDir": "/var/lib/docker/overlay2/0d592b07575af3ae0d650b98fdc151a01a60882b16ca6fbdf2b5ab602c6e88f5/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "multinode-052675",
"Source": "/var/lib/docker/volumes/multinode-052675/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "multinode-052675",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "multinode-052675",
"name.minikube.sigs.k8s.io": "multinode-052675",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5004b0f45c3461ea2e19628e37ba0c6ee2efb6fe7adbec99269076992ef3f002",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32852"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32851"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32848"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32850"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32849"
}
]
},
"SandboxKey": "/var/run/docker/netns/5004b0f45c34",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"multinode-052675": {
"IPAMConfig": {
"IPv4Address": "192.168.58.2"
},
"Links": null,
"Aliases": [
"314f7839c3ce",
"multinode-052675"
],
"NetworkID": "2c5d882139a100a36fc7907b7c297037e49f8b96a91c4fd0ce3c1e2733608fac",
"EndpointID": "01c0ded6d1575738961e5d1f4a19718c2ebbe1822adaac8b1a34e7271af18bba",
"Gateway": "192.168.58.1",
"IPAddress": "192.168.58.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:3a:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
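Note: most of the inspect dump above is boilerplate; the actionable parts are the published ports and the network attachment. The same --format mechanism the helpers use can narrow the output to just those fields, for example:

    docker inspect multinode-052675 --format '{{json .NetworkSettings.Ports}}'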
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-052675 -n multinode-052675
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-052675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-052675 logs -n 25: (1.11647299s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| cp | multinode-052675 cp multinode-052675:/home/docker/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m03:/home/docker/cp-test_multinode-052675_multinode-052675-m03.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-052675 ssh -n multinode-052675-m03 sudo cat | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | /home/docker/cp-test_multinode-052675_multinode-052675-m03.txt | | | | | |
| cp | multinode-052675 cp testdata/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m02:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | /tmp/TestMultiNodeserialCopyFile1635582165/001/cp-test_multinode-052675-m02.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675:/home/docker/cp-test_multinode-052675-m02_multinode-052675.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-052675 ssh -n multinode-052675 sudo cat | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | /home/docker/cp-test_multinode-052675-m02_multinode-052675.txt | | | | | |
| cp | multinode-052675 cp multinode-052675-m02:/home/docker/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m03:/home/docker/cp-test_multinode-052675-m02_multinode-052675-m03.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-052675 ssh -n multinode-052675-m03 sudo cat | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | /home/docker/cp-test_multinode-052675-m02_multinode-052675-m03.txt | | | | | |
| cp | multinode-052675 cp testdata/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | /tmp/TestMultiNodeserialCopyFile1635582165/001/cp-test_multinode-052675-m03.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675:/home/docker/cp-test_multinode-052675-m03_multinode-052675.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-052675 ssh -n multinode-052675 sudo cat | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | /home/docker/cp-test_multinode-052675-m03_multinode-052675.txt | | | | | |
| cp | multinode-052675 cp multinode-052675-m03:/home/docker/cp-test.txt | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m02:/home/docker/cp-test_multinode-052675-m03_multinode-052675-m02.txt | | | | | |
| ssh | multinode-052675 ssh -n | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | multinode-052675-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-052675 ssh -n multinode-052675-m02 sudo cat | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| | /home/docker/cp-test_multinode-052675-m03_multinode-052675-m02.txt | | | | | |
| node | multinode-052675 node stop m03 | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | 28 Jan 23 18:37 UTC |
| node | multinode-052675 node start | multinode-052675 | jenkins | v1.29.0 | 28 Jan 23 18:37 UTC | |
| | m03 --alsologtostderr | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/28 18:35:38
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.19.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0128 18:35:38.654513 121576 out.go:296] Setting OutFile to fd 1 ...
I0128 18:35:38.654636 121576 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:35:38.654644 121576 out.go:309] Setting ErrFile to fd 2...
I0128 18:35:38.654649 121576 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0128 18:35:38.654765 121576 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3259/.minikube/bin
I0128 18:35:38.656127 121576 out.go:303] Setting JSON to false
I0128 18:35:38.657615 121576 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":1091,"bootTime":1674929848,"procs":570,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0128 18:35:38.657686 121576 start.go:135] virtualization: kvm guest
I0128 18:35:38.660205 121576 out.go:177] * [multinode-052675] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0128 18:35:38.661669 121576 notify.go:220] Checking for updates...
I0128 18:35:38.663047 121576 out.go:177] - MINIKUBE_LOCATION=15565
I0128 18:35:38.664647 121576 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0128 18:35:38.666308 121576 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15565-3259/kubeconfig
I0128 18:35:38.667924 121576 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3259/.minikube
I0128 18:35:38.670441 121576 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0128 18:35:38.671951 121576 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0128 18:35:38.673782 121576 driver.go:365] Setting default libvirt URI to qemu:///system
I0128 18:35:38.701187 121576 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
I0128 18:35:38.701290 121576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0128 18:35:38.798534 121576 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-28 18:35:38.721493359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0128 18:35:38.798632 121576 docker.go:282] overlay module found
I0128 18:35:38.801215 121576 out.go:177] * Using the docker driver based on user configuration
I0128 18:35:38.802908 121576 start.go:296] selected driver: docker
I0128 18:35:38.802927 121576 start.go:857] validating driver "docker" against <nil>
I0128 18:35:38.802939 121576 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0128 18:35:38.803684 121576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0128 18:35:38.901084 121576 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-28 18:35:38.824010094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660674048 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0128 18:35:38.901218 121576 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0128 18:35:38.901395 121576 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0128 18:35:38.903806 121576 out.go:177] * Using Docker driver with root privileges
I0128 18:35:38.905605 121576 cni.go:84] Creating CNI manager for ""
I0128 18:35:38.905635 121576 cni.go:136] 0 nodes found, recommending kindnet
I0128 18:35:38.905645 121576 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
I0128 18:35:38.905656 121576 start_flags.go:319] config:
{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 18:35:38.907589 121576 out.go:177] * Starting control plane node multinode-052675 in cluster multinode-052675
I0128 18:35:38.909578 121576 cache.go:120] Beginning downloading kic base image for docker with docker
I0128 18:35:38.911485 121576 out.go:177] * Pulling base image ...
I0128 18:35:38.913505 121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:35:38.913569 121576 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0128 18:35:38.913579 121576 cache.go:57] Caching tarball of preloaded images
I0128 18:35:38.913628 121576 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0128 18:35:38.913656 121576 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0128 18:35:38.913665 121576 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0128 18:35:38.913987 121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:35:38.914007 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json: {Name:mk32894770f2a18eadadbbeaddece988df6d749a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
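Note: the file saved above is the flattened cluster spec logged at start_flags.go:319. To read it back in a friendlier form than the single-line dump, one could pretty-print the JSON (assuming python3 is available on the agent):

    python3 -m json.tool /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json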
I0128 18:35:38.936811 121576 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0128 18:35:38.936834 121576 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
I0128 18:35:38.936851 121576 cache.go:193] Successfully downloaded all kic artifacts
I0128 18:35:38.936894 121576 start.go:364] acquiring machines lock for multinode-052675: {Name:mk85ebbdb31f233e850f6772b4e0f5a60ad37b83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0128 18:35:38.937019 121576 start.go:368] acquired machines lock for "multinode-052675" in 89.778µs
I0128 18:35:38.937047 121576 start.go:93] Provisioning new machine with config: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0128 18:35:38.937141 121576 start.go:125] createHost starting for "" (driver="docker")
I0128 18:35:38.940342 121576 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0128 18:35:38.940558 121576 start.go:159] libmachine.API.Create for "multinode-052675" (driver="docker")
I0128 18:35:38.940581 121576 client.go:168] LocalClient.Create starting
I0128 18:35:38.940662 121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem
I0128 18:35:38.940692 121576 main.go:141] libmachine: Decoding PEM data...
I0128 18:35:38.940709 121576 main.go:141] libmachine: Parsing certificate...
I0128 18:35:38.940759 121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem
I0128 18:35:38.940772 121576 main.go:141] libmachine: Decoding PEM data...
I0128 18:35:38.940784 121576 main.go:141] libmachine: Parsing certificate...
I0128 18:35:38.941094 121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0128 18:35:38.962469 121576 cli_runner.go:211] docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0128 18:35:38.962531 121576 network_create.go:281] running [docker network inspect multinode-052675] to gather additional debugging logs...
I0128 18:35:38.962550 121576 cli_runner.go:164] Run: docker network inspect multinode-052675
W0128 18:35:38.985242 121576 cli_runner.go:211] docker network inspect multinode-052675 returned with exit code 1
I0128 18:35:38.985288 121576 network_create.go:284] error running [docker network inspect multinode-052675]: docker network inspect multinode-052675: exit status 1
stdout:
[]
stderr:
Error: No such network: multinode-052675
I0128 18:35:38.985306 121576 network_create.go:286] output of [docker network inspect multinode-052675]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: multinode-052675
** /stderr **
I0128 18:35:38.985366 121576 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0128 18:35:39.007351 121576 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5bbc83fbc3cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:58:2b:8e:8b} reservation:<nil>}
I0128 18:35:39.007838 121576 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e08de0}
I0128 18:35:39.007867 121576 network_create.go:123] attempt to create docker network multinode-052675 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0128 18:35:39.007937 121576 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-052675 multinode-052675
I0128 18:35:39.065261 121576 network_create.go:107] docker network multinode-052675 192.168.58.0/24 created
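Note: network.go picks the first free private /24 here; 192.168.49.0/24 was already taken by the existing bridge br-5bbc83fbc3cb, so 192.168.58.0/24 wins. To reproduce the scan's view of which subnets Docker has already allocated on a host, a one-liner along these lines works:

    docker network ls --format '{{.Name}}' | xargs -I{} docker network inspect {} --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'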
I0128 18:35:39.065290 121576 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-052675" container
I0128 18:35:39.065364 121576 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0128 18:35:39.086245 121576 cli_runner.go:164] Run: docker volume create multinode-052675 --label name.minikube.sigs.k8s.io=multinode-052675 --label created_by.minikube.sigs.k8s.io=true
I0128 18:35:39.108252 121576 oci.go:103] Successfully created a docker volume multinode-052675
I0128 18:35:39.108322 121576 cli_runner.go:164] Run: docker run --rm --name multinode-052675-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675 --entrypoint /usr/bin/test -v multinode-052675:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
I0128 18:35:39.664859 121576 oci.go:107] Successfully prepared a docker volume multinode-052675
I0128 18:35:39.664901 121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:35:39.664923 121576 kic.go:190] Starting extracting preloaded images to volume ...
I0128 18:35:39.664995 121576 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
I0128 18:35:44.658237 121576 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (4.993156975s)
I0128 18:35:44.658267 121576 kic.go:199] duration metric: took 4.993343 seconds to extract preloaded images to volume
W0128 18:35:44.658434 121576 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0128 18:35:44.658558 121576 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0128 18:35:44.757525 121576 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-052675 --name multinode-052675 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-052675 --network multinode-052675 --ip 192.168.58.2 --volume multinode-052675:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
I0128 18:35:45.157931 121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Running}}
I0128 18:35:45.183752 121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:35:45.208132 121576 cli_runner.go:164] Run: docker exec multinode-052675 stat /var/lib/dpkg/alternatives/iptables
I0128 18:35:45.254420 121576 oci.go:144] the created container "multinode-052675" has a running status.
I0128 18:35:45.254456 121576 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa...
I0128 18:35:45.342834 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0128 18:35:45.342900 121576 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0128 18:35:45.406962 121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:35:45.430658 121576 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0128 18:35:45.430684 121576 kic_runner.go:114] Args: [docker exec --privileged multinode-052675 chown docker:docker /home/docker/.ssh/authorized_keys]
I0128 18:35:45.498575 121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:35:45.521124 121576 machine.go:88] provisioning docker machine ...
I0128 18:35:45.521169 121576 ubuntu.go:169] provisioning hostname "multinode-052675"
I0128 18:35:45.521226 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:45.548352 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:35:45.548618 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0128 18:35:45.548650 121576 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-052675 && echo "multinode-052675" | sudo tee /etc/hostname
I0128 18:35:45.549274 121576 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32780->127.0.0.1:32852: read: connection reset by peer
I0128 18:35:48.689431 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675
I0128 18:35:48.689510 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:48.713097 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:35:48.713268 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0128 18:35:48.713286 121576 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-052675' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675/g' /etc/hosts;
else
echo '127.0.1.1 multinode-052675' | sudo tee -a /etc/hosts;
fi
fi
I0128 18:35:48.844487 121576 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0128 18:35:48.844514 121576 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
I0128 18:35:48.844536 121576 ubuntu.go:177] setting up certificates
I0128 18:35:48.844545 121576 provision.go:83] configureAuth start
I0128 18:35:48.844597 121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675
I0128 18:35:48.866662 121576 provision.go:138] copyHostCerts
I0128 18:35:48.866696 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
I0128 18:35:48.866728 121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
I0128 18:35:48.866739 121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
I0128 18:35:48.866810 121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
I0128 18:35:48.866896 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
I0128 18:35:48.866919 121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
I0128 18:35:48.866927 121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
I0128 18:35:48.866958 121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
I0128 18:35:48.867014 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
I0128 18:35:48.867034 121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
I0128 18:35:48.867040 121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
I0128 18:35:48.867071 121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
I0128 18:35:48.867132 121576 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675]
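Note: the server cert generated above must carry every name and IP minikube will dial, hence the SAN list in the log line. To confirm which SANs actually made it into the issued cert, a standard openssl check would be:

    openssl x509 -in /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'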
I0128 18:35:49.179482 121576 provision.go:172] copyRemoteCerts
I0128 18:35:49.179549 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0128 18:35:49.179581 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:49.203934 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:35:49.295905 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0128 18:35:49.295979 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0128 18:35:49.313674 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0128 18:35:49.313728 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0128 18:35:49.330710 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem -> /etc/docker/server.pem
I0128 18:35:49.330760 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0128 18:35:49.347624 121576 provision.go:86] duration metric: configureAuth took 503.066444ms
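For context: the server cert generated above carries both IP and DNS SANs (192.168.58.2, 127.0.0.1, localhost, minikube, multinode-052675) so the TLS endpoint dockerd later exposes on 2376 validates under any name minikube dials. A minimal Go sketch of issuing such a cert, self-signed for brevity where minikube actually signs with its ca-key.pem (the names are copied from the log; everything else is illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-052675"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-052675"},
	}
	// Self-signed (template doubles as parent); minikube passes its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}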
I0128 18:35:49.347651 121576 ubuntu.go:193] setting minikube options for container-runtime
I0128 18:35:49.347805 121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:35:49.347850 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:49.370091 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:35:49.370273 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0128 18:35:49.370287 121576 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0128 18:35:49.500764 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0128 18:35:49.500789 121576 ubuntu.go:71] root file system type: overlay
I0128 18:35:49.500982 121576 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0128 18:35:49.501044 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:49.525240 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:35:49.525391 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0128 18:35:49.525468 121576 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0128 18:35:49.664945 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
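The unit echoed back above is rendered client-side and piped through sudo tee over SSH; the load-bearing detail, as its own comments say, is the bare ExecStart= that clears the command inherited from the base unit before the override is declared. A rough Go text/template sketch of rendering such an override (illustrative only, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// An empty ExecStart= resets any inherited command; without it systemd
// rejects a second ExecStart for Type=notify services.
const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock
`

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	// Printed here; the real flow pipes the rendered text to
	// `sudo tee /lib/systemd/system/docker.service.new` over SSH.
	if err := t.Execute(os.Stdout, struct{ Port int }{2376}); err != nil {
		panic(err)
	}
}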
I0128 18:35:49.665020 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:49.689142 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:35:49.689302 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0128 18:35:49.689385 121576 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0128 18:35:50.333243 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-01-19 17:34:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-28 18:35:49.659262819 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
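Note that the replacement only ran because diff exited non-zero: the `diff || { mv; daemon-reload; restart; }` shape swaps the unit in and restarts docker only when the rendered file differs, which keeps repeated node starts idempotent. The same pattern as a standalone Go sketch (paths assumed; run as root):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(cur, next) {
		return // unchanged: skip the needless daemon restart
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}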
I0128 18:35:50.333271 121576 machine.go:91] provisioned docker machine in 4.8121178s
I0128 18:35:50.333289 121576 client.go:171] LocalClient.Create took 11.392703028s
I0128 18:35:50.333301 121576 start.go:167] duration metric: libmachine.API.Create for "multinode-052675" took 11.392742716s
I0128 18:35:50.333309 121576 start.go:300] post-start starting for "multinode-052675" (driver="docker")
I0128 18:35:50.333316 121576 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0128 18:35:50.333377 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0128 18:35:50.333416 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:50.357009 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:35:50.452278 121576 ssh_runner.go:195] Run: cat /etc/os-release
I0128 18:35:50.454884 121576 command_runner.go:130] > NAME="Ubuntu"
I0128 18:35:50.454899 121576 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
I0128 18:35:50.454903 121576 command_runner.go:130] > ID=ubuntu
I0128 18:35:50.454908 121576 command_runner.go:130] > ID_LIKE=debian
I0128 18:35:50.454913 121576 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
I0128 18:35:50.454917 121576 command_runner.go:130] > VERSION_ID="20.04"
I0128 18:35:50.454922 121576 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
I0128 18:35:50.454929 121576 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
I0128 18:35:50.454937 121576 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I0128 18:35:50.454952 121576 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I0128 18:35:50.454962 121576 command_runner.go:130] > VERSION_CODENAME=focal
I0128 18:35:50.454973 121576 command_runner.go:130] > UBUNTU_CODENAME=focal
I0128 18:35:50.455028 121576 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0128 18:35:50.455052 121576 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0128 18:35:50.455066 121576 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0128 18:35:50.455076 121576 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0128 18:35:50.455087 121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
I0128 18:35:50.455134 121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
I0128 18:35:50.455209 121576 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
I0128 18:35:50.455219 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /etc/ssl/certs/103532.pem
I0128 18:35:50.455302 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0128 18:35:50.462121 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:35:50.479679 121576 start.go:303] post-start completed in 146.357687ms
I0128 18:35:50.480033 121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675
I0128 18:35:50.502489 121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:35:50.502706 121576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0128 18:35:50.502742 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:50.524482 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:35:50.612747 121576 command_runner.go:130] > 16%
I0128 18:35:50.612820 121576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0128 18:35:50.616387 121576 command_runner.go:130] > 247G
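Both probes are thin wrappers over df: field 5 of the second output line is the use percentage (16%), and field 4 under -BG is the available space (247G). The same parsing without awk, as a sketch (standard Linux df layout assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dfField runs df with one flag and returns the 1-indexed column from
// the second line of output (the first line is the header).
func dfField(path, flag string, col int) (string, error) {
	out, err := exec.Command("df", flag, path).Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	fields := strings.Fields(lines[1])
	return fields[col-1], nil
}

func main() {
	used, _ := dfField("/var", "-h", 5)
	free, _ := dfField("/var", "-BG", 4)
	fmt.Println("used:", used, "available:", free)
}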
I0128 18:35:50.616425 121576 start.go:128] duration metric: createHost completed in 11.679275622s
I0128 18:35:50.616435 121576 start.go:83] releasing machines lock for "multinode-052675", held for 11.679402154s
I0128 18:35:50.616507 121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675
I0128 18:35:50.639067 121576 ssh_runner.go:195] Run: cat /version.json
I0128 18:35:50.639112 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:50.639125 121576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0128 18:35:50.639186 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:35:50.661638 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:35:50.662053 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:35:50.784721 121576 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0128 18:35:50.784814 121576 command_runner.go:130] > {"iso_version": "v1.29.0", "kicbase_version": "v0.0.37", "minikube_version": "v1.29.0", "commit": "69417d0c8c1a2f3e72a4e5999252066a50eceb1b"}
I0128 18:35:50.784952 121576 ssh_runner.go:195] Run: systemctl --version
I0128 18:35:50.788704 121576 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
I0128 18:35:50.788731 121576 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
I0128 18:35:50.788923 121576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0128 18:35:50.792422 121576 command_runner.go:130] > File: /etc/cni/net.d/200-loopback.conf
I0128 18:35:50.792459 121576 command_runner.go:130] > Size: 54 Blocks: 8 IO Block: 4096 regular file
I0128 18:35:50.792470 121576 command_runner.go:130] > Device: 34h/52d Inode: 568458 Links: 1
I0128 18:35:50.792480 121576 command_runner.go:130] > Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
I0128 18:35:50.792488 121576 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
I0128 18:35:50.792496 121576 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
I0128 18:35:50.792501 121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.814355792 +0000
I0128 18:35:50.792507 121576 command_runner.go:130] > Birth: -
I0128 18:35:50.792693 121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0128 18:35:50.811627 121576 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
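What that find/sed pipeline does: inject a "name" field into the loopback CNI config if it lacks one, and pin cniVersion to 1.0.0. The same transformation over an in-memory JSON document, as a sketch (sample input assumed; the real 200-loopback.conf may carry more fields):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // matches the sed insert-before-"type" edit
	}
	conf["cniVersion"] = "1.0.0" // matches the sed substitution
	patched, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(patched))
}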
I0128 18:35:50.811743 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0128 18:35:50.818047 121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0128 18:35:50.830621 121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0128 18:35:50.848722 121576 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf,
I0128 18:35:50.848775 121576 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0128 18:35:50.848789 121576 start.go:483] detecting cgroup driver to use...
I0128 18:35:50.848829 121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
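detect.go settles on "cgroupfs" here; one common way to make that call (not necessarily minikube's exact method) is to check whether /sys/fs/cgroup is a unified cgroup2 mount, in which case the systemd driver is the usual choice:

package main

// Requires golang.org/x/sys/unix (go get golang.org/x/sys/unix).

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2: systemd driver is typical")
	} else {
		fmt.Println("cgroup v1: cgroupfs driver is typical")
	}
}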
I0128 18:35:50.848967 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:35:50.861347 121576 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0128 18:35:50.861375 121576 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0128 18:35:50.862091 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0128 18:35:50.870112 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0128 18:35:50.878302 121576 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0128 18:35:50.878350 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0128 18:35:50.886488 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:35:50.894963 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0128 18:35:50.903703 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:35:50.912977 121576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0128 18:35:50.921938 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0128 18:35:50.930610 121576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0128 18:35:50.937803 121576 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0128 18:35:50.937877 121576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0128 18:35:50.945261 121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:35:51.014930 121576 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0128 18:35:51.095722 121576 start.go:483] detecting cgroup driver to use...
I0128 18:35:51.095778 121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:35:51.095830 121576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0128 18:35:51.104894 121576 command_runner.go:130] > # /lib/systemd/system/docker.service
I0128 18:35:51.104939 121576 command_runner.go:130] > [Unit]
I0128 18:35:51.104949 121576 command_runner.go:130] > Description=Docker Application Container Engine
I0128 18:35:51.104957 121576 command_runner.go:130] > Documentation=https://docs.docker.com
I0128 18:35:51.104969 121576 command_runner.go:130] > BindsTo=containerd.service
I0128 18:35:51.104979 121576 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
I0128 18:35:51.104990 121576 command_runner.go:130] > Wants=network-online.target
I0128 18:35:51.105000 121576 command_runner.go:130] > Requires=docker.socket
I0128 18:35:51.105020 121576 command_runner.go:130] > StartLimitBurst=3
I0128 18:35:51.105031 121576 command_runner.go:130] > StartLimitIntervalSec=60
I0128 18:35:51.105040 121576 command_runner.go:130] > [Service]
I0128 18:35:51.105047 121576 command_runner.go:130] > Type=notify
I0128 18:35:51.105056 121576 command_runner.go:130] > Restart=on-failure
I0128 18:35:51.105074 121576 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0128 18:35:51.105090 121576 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0128 18:35:51.105105 121576 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0128 18:35:51.105119 121576 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0128 18:35:51.105133 121576 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0128 18:35:51.105147 121576 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0128 18:35:51.105161 121576 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0128 18:35:51.105193 121576 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0128 18:35:51.105208 121576 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0128 18:35:51.105218 121576 command_runner.go:130] > ExecStart=
I0128 18:35:51.105245 121576 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0128 18:35:51.105256 121576 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0128 18:35:51.105267 121576 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0128 18:35:51.105281 121576 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0128 18:35:51.105291 121576 command_runner.go:130] > LimitNOFILE=infinity
I0128 18:35:51.105299 121576 command_runner.go:130] > LimitNPROC=infinity
I0128 18:35:51.105307 121576 command_runner.go:130] > LimitCORE=infinity
I0128 18:35:51.105316 121576 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0128 18:35:51.105328 121576 command_runner.go:130] > # Only systemd 226 and above support this version.
I0128 18:35:51.105338 121576 command_runner.go:130] > TasksMax=infinity
I0128 18:35:51.105348 121576 command_runner.go:130] > TimeoutStartSec=0
I0128 18:35:51.105359 121576 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0128 18:35:51.105369 121576 command_runner.go:130] > Delegate=yes
I0128 18:35:51.105384 121576 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0128 18:35:51.105394 121576 command_runner.go:130] > KillMode=process
I0128 18:35:51.105408 121576 command_runner.go:130] > [Install]
I0128 18:35:51.105419 121576 command_runner.go:130] > WantedBy=multi-user.target
I0128 18:35:51.105748 121576 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0128 18:35:51.105815 121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0128 18:35:51.115947 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:35:51.129293 121576 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0128 18:35:51.129331 121576 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0128 18:35:51.129386 121576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0128 18:35:51.215555 121576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0128 18:35:51.313329 121576 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0128 18:35:51.313359 121576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0128 18:35:51.326876 121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:35:51.412468 121576 ssh_runner.go:195] Run: sudo systemctl restart docker
I0128 18:35:51.623365 121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:35:51.703684 121576 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
I0128 18:35:51.703760 121576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0128 18:35:51.784951 121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:35:51.860364 121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:35:51.935240 121576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0128 18:35:51.947654 121576 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0128 18:35:51.947725 121576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0128 18:35:51.950925 121576 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0128 18:35:51.950954 121576 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0128 18:35:51.950963 121576 command_runner.go:130] > Device: 3fh/63d Inode: 206 Links: 1
I0128 18:35:51.950973 121576 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 999/ docker)
I0128 18:35:51.950981 121576 command_runner.go:130] > Access: 2023-01-28 18:35:51.939485456 +0000
I0128 18:35:51.950990 121576 command_runner.go:130] > Modify: 2023-01-28 18:35:51.939485456 +0000
I0128 18:35:51.951001 121576 command_runner.go:130] > Change: 2023-01-28 18:35:51.943485846 +0000
I0128 18:35:51.951011 121576 command_runner.go:130] > Birth: -
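The "Will wait 60s for socket path" step is a plain poll until the cri-dockerd socket shows up. A sketch of that loop (interval and helper name are illustrative, not minikube's retry helper):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("socket ready")
}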
I0128 18:35:51.951032 121576 start.go:551] Will wait 60s for crictl version
I0128 18:35:51.951072 121576 ssh_runner.go:195] Run: which crictl
I0128 18:35:51.953864 121576 command_runner.go:130] > /usr/bin/crictl
I0128 18:35:51.953985 121576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0128 18:35:52.048118 121576 command_runner.go:130] > Version: 0.1.0
I0128 18:35:52.048152 121576 command_runner.go:130] > RuntimeName: docker
I0128 18:35:52.048161 121576 command_runner.go:130] > RuntimeVersion: 20.10.23
I0128 18:35:52.048170 121576 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0128 18:35:52.049799 121576 start.go:567] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0128 18:35:52.049855 121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:35:52.076368 121576 command_runner.go:130] > 20.10.23
I0128 18:35:52.077580 121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:35:52.103757 121576 command_runner.go:130] > 20.10.23
I0128 18:35:52.106824 121576 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0128 18:35:52.106909 121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0128 18:35:52.128260 121576 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0128 18:35:52.131415 121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
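The grep -v/echo/cp sequence is an upsert on /etc/hosts: drop any stale host.minikube.internal line, append the fresh mapping, then sudo cp the temp file back into place. Expressed directly in Go as a sketch (matching is simplified; the shell version anchors on the tab-separated trailing name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes every line mentioning name and appends a fresh
// "ip<TAB>name" entry, mirroring the shell pipeline above.
func upsertHost(hosts, name, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.Contains(line, name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, _ := os.ReadFile("/etc/hosts")
	// Printed rather than written back; the real flow needs sudo cp.
	fmt.Print(upsertHost(string(data), "host.minikube.internal", "192.168.58.1"))
}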
I0128 18:35:52.141376 121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:35:52.141440 121576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0128 18:35:52.163955 121576 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0128 18:35:52.163976 121576 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0128 18:35:52.163981 121576 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0128 18:35:52.163986 121576 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0128 18:35:52.163991 121576 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0128 18:35:52.163995 121576 command_runner.go:130] > registry.k8s.io/pause:3.9
I0128 18:35:52.163999 121576 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0128 18:35:52.164005 121576 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0128 18:35:52.164039 121576 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0128 18:35:52.164051 121576 docker.go:560] Images already preloaded, skipping extraction
I0128 18:35:52.164099 121576 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0128 18:35:52.184774 121576 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0128 18:35:52.184798 121576 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0128 18:35:52.184803 121576 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0128 18:35:52.184808 121576 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0128 18:35:52.184813 121576 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
I0128 18:35:52.184817 121576 command_runner.go:130] > registry.k8s.io/pause:3.9
I0128 18:35:52.184822 121576 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
I0128 18:35:52.184827 121576 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0128 18:35:52.185951 121576 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0128 18:35:52.185971 121576 cache_images.go:84] Images are preloaded, skipping loading
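"Images are preloaded, skipping loading" falls out of a set comparison between the expected manifest and the `docker images --format {{.Repository}}:{{.Tag}}` listing. A compact sketch (expected list trimmed to a few entries from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.26.1",
		"registry.k8s.io/etcd:3.5.6-0",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, extraction needed:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}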
I0128 18:35:52.186027 121576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0128 18:35:52.252127 121576 command_runner.go:130] > cgroupfs
I0128 18:35:52.253494 121576 cni.go:84] Creating CNI manager for ""
I0128 18:35:52.253514 121576 cni.go:136] 1 nodes found, recommending kindnet
I0128 18:35:52.253524 121576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0128 18:35:52.253547 121576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0128 18:35:52.253696 121576 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-052675"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0128 18:35:52.253777 121576 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0128 18:35:52.253825 121576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0128 18:35:52.261118 121576 command_runner.go:130] > kubeadm
I0128 18:35:52.261137 121576 command_runner.go:130] > kubectl
I0128 18:35:52.261143 121576 command_runner.go:130] > kubelet
I0128 18:35:52.261163 121576 binaries.go:44] Found k8s binaries, skipping transfer
I0128 18:35:52.261202 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0128 18:35:52.268092 121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
I0128 18:35:52.281332 121576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0128 18:35:52.295184 121576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
I0128 18:35:52.310668 121576 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0128 18:35:52.314358 121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 18:35:52.324913 121576 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.2
I0128 18:35:52.324941 121576 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:35:52.325084 121576 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
I0128 18:35:52.325256 121576 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
I0128 18:35:52.325324 121576 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key
I0128 18:35:52.325339 121576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt with IP's: []
I0128 18:35:52.389976 121576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt ...
I0128 18:35:52.390011 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt: {Name:mk6256c6f690324ccb025cd062c097c1548edb6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:35:52.390192 121576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key ...
I0128 18:35:52.390204 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key: {Name:mkc615be81182c8600f095a5a9816bfa6149b5c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:35:52.390277 121576 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041
I0128 18:35:52.390291 121576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0128 18:35:52.635045 121576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041 ...
I0128 18:35:52.635089 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041: {Name:mk4fb8d5a64eb7055553ed41478812e02920018d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:35:52.635275 121576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041 ...
I0128 18:35:52.635288 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041: {Name:mkb1fe31e630d1e60dd66b937af776be924593f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:35:52.635357 121576 certs.go:333] copying /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt
I0128 18:35:52.635431 121576 certs.go:337] copying /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key
I0128 18:35:52.635478 121576 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key
I0128 18:35:52.635491 121576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt with IP's: []
I0128 18:35:52.728534 121576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt ...
I0128 18:35:52.728567 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt: {Name:mkc80942dc65c06a9b7de9d77a7e11e3b4f4a219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:35:52.728726 121576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key ...
I0128 18:35:52.728737 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key: {Name:mk74652b80055550da239fc7fbdd53f6c1af5c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:35:52.728799 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0128 18:35:52.728814 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0128 18:35:52.728822 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0128 18:35:52.728835 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0128 18:35:52.728847 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0128 18:35:52.728855 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0128 18:35:52.728865 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0128 18:35:52.728874 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0128 18:35:52.728920 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
W0128 18:35:52.728952 121576 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
I0128 18:35:52.728963 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
I0128 18:35:52.729028 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
I0128 18:35:52.729054 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
I0128 18:35:52.729075 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
I0128 18:35:52.729110 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:35:52.729144 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0128 18:35:52.729157 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem -> /usr/share/ca-certificates/10353.pem
I0128 18:35:52.729169 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /usr/share/ca-certificates/103532.pem
I0128 18:35:52.729692 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0128 18:35:52.747915 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0128 18:35:52.764814 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0128 18:35:52.781666 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0128 18:35:52.798841 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0128 18:35:52.817117 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0128 18:35:52.834165 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0128 18:35:52.851823 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0128 18:35:52.870078 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0128 18:35:52.887564 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
I0128 18:35:52.905470 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
I0128 18:35:52.923781 121576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0128 18:35:52.936202 121576 ssh_runner.go:195] Run: openssl version
I0128 18:35:52.940671 121576 command_runner.go:130] > OpenSSL 1.1.1f 31 Mar 2020
I0128 18:35:52.940812 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0128 18:35:52.947706 121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0128 18:35:52.950746 121576 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0128 18:35:52.950783 121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0128 18:35:52.950825 121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0128 18:35:52.955595 121576 command_runner.go:130] > b5213941
I0128 18:35:52.955773 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0128 18:35:52.963292 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
I0128 18:35:52.971323 121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
I0128 18:35:52.974585 121576 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
I0128 18:35:52.974634 121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
I0128 18:35:52.974692 121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
I0128 18:35:52.979450 121576 command_runner.go:130] > 51391683
I0128 18:35:52.979664 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
I0128 18:35:52.987069 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
I0128 18:35:52.994920 121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
I0128 18:35:52.998354 121576 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
I0128 18:35:52.998393 121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
I0128 18:35:52.998443 121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
I0128 18:35:53.003371 121576 command_runner.go:130] > 3ec20f2e
I0128 18:35:53.003436 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
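The hash-and-symlink steps above follow OpenSSL's CA directory convention: a trust store like /etc/ssl/certs is searched by <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941, 51391683, 3ec20f2e above). Installing one cert that way, as a sketch (needs root; paths assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // emulate ln -fs: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}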
I0128 18:35:53.011362 121576 kubeadm.go:401] StartCluster: {Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 18:35:53.011488 121576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0128 18:35:53.032670 121576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0128 18:35:53.039074 121576 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
I0128 18:35:53.039103 121576 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
I0128 18:35:53.039111 121576 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I0128 18:35:53.039655 121576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0128 18:35:53.046335 121576 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0128 18:35:53.046389 121576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0128 18:35:53.053971 121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0128 18:35:53.054001 121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0128 18:35:53.054012 121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0128 18:35:53.054025 121576 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0128 18:35:53.054068 121576 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0128 18:35:53.054117 121576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
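For orientation, the line above is the heart of the bring-up: minikube shells into the node and runs kubeadm init against the generated config, telling it to ignore the preflight checks that cannot pass inside a Docker container. A minimal local sketch in Go of the same invocation (assumptions: kubeadm on PATH, root privileges, and an abbreviated ignore list):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors the SSH-run command above, executed locally instead; the
        // ignore list is trimmed here to the entries that matter in a container.
        cmd := exec.Command("sudo", "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("kubeadm init failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }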
I0128 18:35:53.102905 121576 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
I0128 18:35:53.102926 121576 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
I0128 18:35:53.102985 121576 kubeadm.go:322] [preflight] Running pre-flight checks
I0128 18:35:53.103000 121576 command_runner.go:130] > [preflight] Running pre-flight checks
I0128 18:35:53.137013 121576 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0128 18:35:53.137044 121576 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
I0128 18:35:53.137102 121576 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1027-gcp
I0128 18:35:53.137132 121576 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
I0128 18:35:53.137196 121576 kubeadm.go:322] OS: Linux
I0128 18:35:53.137210 121576 command_runner.go:130] > OS: Linux
I0128 18:35:53.137262 121576 kubeadm.go:322] CGROUPS_CPU: enabled
I0128 18:35:53.137273 121576 command_runner.go:130] > CGROUPS_CPU: enabled
I0128 18:35:53.137327 121576 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I0128 18:35:53.137338 121576 command_runner.go:130] > CGROUPS_CPUACCT: enabled
I0128 18:35:53.137403 121576 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0128 18:35:53.137426 121576 command_runner.go:130] > CGROUPS_CPUSET: enabled
I0128 18:35:53.137491 121576 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0128 18:35:53.137504 121576 command_runner.go:130] > CGROUPS_DEVICES: enabled
I0128 18:35:53.137544 121576 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0128 18:35:53.137552 121576 command_runner.go:130] > CGROUPS_FREEZER: enabled
I0128 18:35:53.137609 121576 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0128 18:35:53.137615 121576 command_runner.go:130] > CGROUPS_MEMORY: enabled
I0128 18:35:53.137650 121576 kubeadm.go:322] CGROUPS_PIDS: enabled
I0128 18:35:53.137657 121576 command_runner.go:130] > CGROUPS_PIDS: enabled
I0128 18:35:53.137703 121576 kubeadm.go:322] CGROUPS_HUGETLB: enabled
I0128 18:35:53.137710 121576 command_runner.go:130] > CGROUPS_HUGETLB: enabled
I0128 18:35:53.137750 121576 kubeadm.go:322] CGROUPS_BLKIO: enabled
I0128 18:35:53.137759 121576 command_runner.go:130] > CGROUPS_BLKIO: enabled
I0128 18:35:53.203276 121576 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0128 18:35:53.203306 121576 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
I0128 18:35:53.203406 121576 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0128 18:35:53.203417 121576 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I0128 18:35:53.203522 121576 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0128 18:35:53.203534 121576 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0128 18:35:53.332965 121576 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0128 18:35:53.337524 121576 out.go:204] - Generating certificates and keys ...
I0128 18:35:53.333027 121576 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0128 18:35:53.337694 121576 kubeadm.go:322] [certs] Using existing ca certificate authority
I0128 18:35:53.337731 121576 command_runner.go:130] > [certs] Using existing ca certificate authority
I0128 18:35:53.337808 121576 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0128 18:35:53.337818 121576 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0128 18:35:53.436658 121576 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0128 18:35:53.436700 121576 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
I0128 18:35:53.555142 121576 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0128 18:35:53.555193 121576 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
I0128 18:35:53.614846 121576 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0128 18:35:53.614867 121576 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
I0128 18:35:53.764645 121576 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0128 18:35:53.764664 121576 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
I0128 18:35:53.865611 121576 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0128 18:35:53.865638 121576 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
I0128 18:35:53.865792 121576 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
I0128 18:35:53.865804 121576 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
I0128 18:35:54.098510 121576 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0128 18:35:54.098541 121576 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
I0128 18:35:54.098638 121576 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
I0128 18:35:54.098666 121576 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-052675] and IPs [192.168.58.2 127.0.0.1 ::1]
I0128 18:35:54.273367 121576 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0128 18:35:54.273400 121576 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
I0128 18:35:54.338008 121576 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0128 18:35:54.338047 121576 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
I0128 18:35:54.568654 121576 kubeadm.go:322] [certs] Generating "sa" key and public key
I0128 18:35:54.568684 121576 command_runner.go:130] > [certs] Generating "sa" key and public key
I0128 18:35:54.568754 121576 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0128 18:35:54.568766 121576 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0128 18:35:54.859049 121576 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0128 18:35:54.859082 121576 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0128 18:35:55.019596 121576 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0128 18:35:55.019624 121576 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0128 18:35:55.307403 121576 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0128 18:35:55.307436 121576 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0128 18:35:55.396701 121576 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0128 18:35:55.396736 121576 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0128 18:35:55.408922 121576 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0128 18:35:55.408952 121576 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0128 18:35:55.409634 121576 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0128 18:35:55.409653 121576 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0128 18:35:55.409684 121576 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0128 18:35:55.409694 121576 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0128 18:35:55.498346 121576 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0128 18:35:55.498376 121576 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0128 18:35:55.501290 121576 out.go:204] - Booting up control plane ...
I0128 18:35:55.501463 121576 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0128 18:35:55.501484 121576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0128 18:35:55.501682 121576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0128 18:35:55.501702 121576 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0128 18:35:55.503209 121576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0128 18:35:55.503229 121576 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0128 18:35:55.503919 121576 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0128 18:35:55.503937 121576 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0128 18:35:55.505659 121576 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0128 18:35:55.505687 121576 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0128 18:36:04.507982 121576 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002280 seconds
I0128 18:36:04.508013 121576 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.002280 seconds
I0128 18:36:04.508136 121576 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0128 18:36:04.508148 121576 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0128 18:36:04.522271 121576 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0128 18:36:04.522287 121576 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0128 18:36:05.044961 121576 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0128 18:36:05.044984 121576 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
I0128 18:36:05.045206 121576 kubeadm.go:322] [mark-control-plane] Marking the node multinode-052675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0128 18:36:05.045216 121576 command_runner.go:130] > [mark-control-plane] Marking the node multinode-052675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0128 18:36:05.554288 121576 kubeadm.go:322] [bootstrap-token] Using token: dmigo5.p3ot3922dtqo17e1
I0128 18:36:05.554315 121576 command_runner.go:130] > [bootstrap-token] Using token: dmigo5.p3ot3922dtqo17e1
I0128 18:36:05.556145 121576 out.go:204] - Configuring RBAC rules ...
I0128 18:36:05.556245 121576 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0128 18:36:05.556258 121576 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0128 18:36:05.558930 121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0128 18:36:05.558949 121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0128 18:36:05.565152 121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0128 18:36:05.565172 121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0128 18:36:05.567895 121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0128 18:36:05.567916 121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0128 18:36:05.571627 121576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0128 18:36:05.571652 121576 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0128 18:36:05.573993 121576 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0128 18:36:05.574013 121576 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0128 18:36:05.583681 121576 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0128 18:36:05.583702 121576 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0128 18:36:05.785655 121576 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0128 18:36:05.785702 121576 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0128 18:36:05.977070 121576 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0128 18:36:05.977094 121576 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0128 18:36:05.978356 121576 kubeadm.go:322]
I0128 18:36:05.978435 121576 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0128 18:36:05.978448 121576 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
I0128 18:36:05.978457 121576 kubeadm.go:322]
I0128 18:36:05.978540 121576 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0128 18:36:05.978550 121576 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
I0128 18:36:05.978557 121576 kubeadm.go:322]
I0128 18:36:05.978584 121576 kubeadm.go:322] mkdir -p $HOME/.kube
I0128 18:36:05.978593 121576 command_runner.go:130] > mkdir -p $HOME/.kube
I0128 18:36:05.978669 121576 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0128 18:36:05.978678 121576 command_runner.go:130] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0128 18:36:05.978794 121576 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0128 18:36:05.978815 121576 command_runner.go:130] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0128 18:36:05.978823 121576 kubeadm.go:322]
I0128 18:36:05.978900 121576 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0128 18:36:05.978914 121576 command_runner.go:130] > Alternatively, if you are the root user, you can run:
I0128 18:36:05.978944 121576 kubeadm.go:322]
I0128 18:36:05.979011 121576 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0128 18:36:05.979020 121576 command_runner.go:130] > export KUBECONFIG=/etc/kubernetes/admin.conf
I0128 18:36:05.979025 121576 kubeadm.go:322]
I0128 18:36:05.979101 121576 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0128 18:36:05.979115 121576 command_runner.go:130] > You should now deploy a pod network to the cluster.
I0128 18:36:05.979232 121576 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0128 18:36:05.979243 121576 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0128 18:36:05.979332 121576 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0128 18:36:05.979343 121576 command_runner.go:130] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0128 18:36:05.979348 121576 kubeadm.go:322]
I0128 18:36:05.979504 121576 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0128 18:36:05.979524 121576 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
I0128 18:36:05.979629 121576 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0128 18:36:05.979647 121576 command_runner.go:130] > and service account keys on each node and then running the following as root:
I0128 18:36:05.979674 121576 kubeadm.go:322]
I0128 18:36:05.979781 121576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
I0128 18:36:05.979798 121576 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
I0128 18:36:05.979944 121576 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc \
I0128 18:36:05.979964 121576 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc \
I0128 18:36:05.980015 121576 kubeadm.go:322] --control-plane
I0128 18:36:05.980026 121576 command_runner.go:130] > --control-plane
I0128 18:36:05.980031 121576 kubeadm.go:322]
I0128 18:36:05.980141 121576 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0128 18:36:05.980153 121576 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
I0128 18:36:05.980163 121576 kubeadm.go:322]
I0128 18:36:05.980275 121576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
I0128 18:36:05.980286 121576 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dmigo5.p3ot3922dtqo17e1 \
I0128 18:36:05.980414 121576 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc
I0128 18:36:05.980424 121576 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc
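A note on the --discovery-token-ca-cert-hash printed above: kubeadm derives it not from the certificate file bytes but from a SHA-256 over the CA's DER-encoded SubjectPublicKeyInfo. A sketch that reproduces the hash (the ca.crt path is assumed from the certs directory used earlier):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }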
I0128 18:36:05.982695 121576 kubeadm.go:322] W0128 18:35:53.095074 1415 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0128 18:36:05.982718 121576 command_runner.go:130] ! W0128 18:35:53.095074 1415 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0128 18:36:05.983025 121576 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
I0128 18:36:05.983038 121576 command_runner.go:130] ! [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
I0128 18:36:05.983191 121576 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0128 18:36:05.983206 121576 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0128 18:36:05.983229 121576 cni.go:84] Creating CNI manager for ""
I0128 18:36:05.983250 121576 cni.go:136] 1 nodes found, recommending kindnet
I0128 18:36:05.985999 121576 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0128 18:36:05.987973 121576 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0128 18:36:05.992528 121576 command_runner.go:130] > File: /opt/cni/bin/portmap
I0128 18:36:05.992554 121576 command_runner.go:130] > Size: 2828728 Blocks: 5528 IO Block: 4096 regular file
I0128 18:36:05.992569 121576 command_runner.go:130] > Device: 34h/52d Inode: 566552 Links: 1
I0128 18:36:05.992580 121576 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0128 18:36:05.992588 121576 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
I0128 18:36:05.992595 121576 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
I0128 18:36:05.992609 121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.070283151 +0000
I0128 18:36:05.992615 121576 command_runner.go:130] > Birth: -
I0128 18:36:05.993033 121576 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0128 18:36:05.993054 121576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0128 18:36:06.008945 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0128 18:36:06.830005 121576 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
I0128 18:36:06.834458 121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0128 18:36:06.843053 121576 command_runner.go:130] > serviceaccount/kindnet created
I0128 18:36:06.851889 121576 command_runner.go:130] > daemonset.apps/kindnet created
I0128 18:36:06.855359 121576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0128 18:36:06.855487 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=0b7a59349a2d83a39298292bdec73f3c39ac1090 minikube.k8s.io/name=multinode-052675 minikube.k8s.io/updated_at=2023_01_28T18_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:06.855493 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:06.862506 121576 command_runner.go:130] > -16
I0128 18:36:06.862544 121576 ops.go:34] apiserver oom_adj: -16
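The -16 recorded here is the kube-apiserver's oom_adj: a strongly negative score that tells the kernel OOM killer to spare the apiserver under memory pressure. A hypothetical stand-alone version of the same probe (assumes exactly one kube-apiserver process):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep locates the apiserver PID, as the bash one-liner above does.
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }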
I0128 18:36:06.936227 121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
I0128 18:36:06.936325 121576 command_runner.go:130] > node/multinode-052675 labeled
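The two kubectl invocations above stamp the minikube metadata labels onto the node and create the minikube-rbac binding, which grants kube-system's default service account cluster-admin so addon pods can manage cluster-scoped resources. A hedged sketch of the pair using os/exec (paths copied from the log; the label set is abbreviated):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.26.1/kubectl"
        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

        // Tag every node with minikube's metadata labels, as in the log.
        label := exec.Command("sudo", kubectl, "label", "nodes",
            "minikube.k8s.io/name=multinode-052675",
            "minikube.k8s.io/primary=true",
            "--all", "--overwrite", kubeconfig)

        // Bind kube-system's default service account to cluster-admin.
        bind := exec.Command("sudo", kubectl, "create", "clusterrolebinding",
            "minikube-rbac", "--clusterrole=cluster-admin",
            "--serviceaccount=kube-system:default", kubeconfig)

        for _, cmd := range []*exec.Cmd{label, bind} {
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("%v: %s", err, out)
            }
        }
    }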
I0128 18:36:06.936329 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:07.021906 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:07.525260 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:07.585967 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:08.025624 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:08.088028 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:08.525449 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:08.588384 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:09.025163 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:09.089307 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:09.525671 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:09.586423 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:10.025589 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:10.086812 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:10.525471 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:10.591071 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:11.025662 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:11.089860 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:11.525539 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:11.591411 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:12.025705 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:12.089965 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:12.525267 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:12.587263 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:13.025436 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:13.090908 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:13.525460 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:13.588673 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:14.024714 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:14.085993 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:14.525437 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:14.587408 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:15.025473 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:15.092275 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:15.525703 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:15.587568 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:16.025664 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:16.090444 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:16.525697 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:16.594451 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:17.025282 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:17.093459 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:17.525689 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:17.591737 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:18.025417 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:18.090100 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:18.525259 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:18.597454 121576 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0128 18:36:19.025454 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0128 18:36:19.088025 121576 command_runner.go:130] > NAME SECRETS AGE
I0128 18:36:19.088049 121576 command_runner.go:130] > default 0 1s
I0128 18:36:19.090329 121576 kubeadm.go:1073] duration metric: took 12.234901667s to wait for elevateKubeSystemPrivileges.
I0128 18:36:19.090354 121576 kubeadm.go:403] StartCluster complete in 26.079003926s
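The twelve seconds of NotFound lines above are a fixed-cadence poll: roughly every 500ms minikube re-runs kubectl get sa default until the controller manager has created the account, the signal that service accounts (and the RBAC elevation) are usable. A client-go sketch of the same wait (kubeconfig path as used by the logged commands; running it outside the node is an assumption):

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll at the ~500ms cadence visible in the timestamps above until
        // the "default" service account exists, or give up after two minutes.
        err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err != nil {
                return false, nil // still NotFound; keep polling
            }
            return true, nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }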
I0128 18:36:19.090376 121576 settings.go:142] acquiring lock: {Name:mkdfcfb1354fd39bc122921aea86af6bfa22083f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:36:19.090447 121576 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15565-3259/kubeconfig
I0128 18:36:19.091307 121576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3259/kubeconfig: {Name:mkc492e51eda742b57c4c864f32d664b28db65ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:36:19.091555 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0128 18:36:19.091611 121576 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0128 18:36:19.091701 121576 addons.go:65] Setting storage-provisioner=true in profile "multinode-052675"
I0128 18:36:19.091723 121576 addons.go:227] Setting addon storage-provisioner=true in "multinode-052675"
I0128 18:36:19.091725 121576 addons.go:65] Setting default-storageclass=true in profile "multinode-052675"
W0128 18:36:19.091731 121576 addons.go:236] addon storage-provisioner should already be in state true
I0128 18:36:19.091745 121576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-052675"
I0128 18:36:19.091766 121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:36:19.091776 121576 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:36:19.091902 121576 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3259/kubeconfig
I0128 18:36:19.092149 121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:36:19.092327 121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:36:19.092250 121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 18:36:19.092874 121576 cert_rotation.go:137] Starting client certificate rotation controller
I0128 18:36:19.093061 121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0128 18:36:19.093082 121576 round_trippers.go:469] Request Headers:
I0128 18:36:19.093094 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:19.093109 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:19.102612 121576 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0128 18:36:19.102636 121576 round_trippers.go:577] Response Headers:
I0128 18:36:19.102645 121576 round_trippers.go:580] Audit-Id: 73b9783f-56c9-4068-85f6-6dc140b4a104
I0128 18:36:19.102652 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:19.102660 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:19.102669 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:19.102676 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:19.102685 121576 round_trippers.go:580] Content-Length: 291
I0128 18:36:19.102699 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:19 GMT
I0128 18:36:19.107168 121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"352","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0128 18:36:19.107691 121576 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"352","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0128 18:36:19.107753 121576 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0128 18:36:19.107760 121576 round_trippers.go:469] Request Headers:
I0128 18:36:19.107771 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:19.107781 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:19.107791 121576 round_trippers.go:473] Content-Type: application/json
I0128 18:36:19.114385 121576 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0128 18:36:19.114413 121576 round_trippers.go:577] Response Headers:
I0128 18:36:19.114424 121576 round_trippers.go:580] Audit-Id: 8651b65a-9418-4cf7-ba32-e83bcb0fccec
I0128 18:36:19.114434 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:19.114443 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:19.114453 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:19.114462 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:19.114471 121576 round_trippers.go:580] Content-Length: 291
I0128 18:36:19.114481 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:19 GMT
I0128 18:36:19.114519 121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"354","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0128 18:36:19.131907 121576 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0128 18:36:19.131049 121576 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3259/kubeconfig
I0128 18:36:19.134001 121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 18:36:19.134359 121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
I0128 18:36:19.134370 121576 round_trippers.go:469] Request Headers:
I0128 18:36:19.134382 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:19.134390 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:19.134862 121576 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:36:19.134882 121576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0128 18:36:19.134938 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:36:19.140364 121576 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0128 18:36:19.140385 121576 round_trippers.go:577] Response Headers:
I0128 18:36:19.140392 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:19.140398 121576 round_trippers.go:580] Content-Length: 109
I0128 18:36:19.140403 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:19 GMT
I0128 18:36:19.140409 121576 round_trippers.go:580] Audit-Id: d1edbb57-7224-4f28-bd0e-d402d0d16315
I0128 18:36:19.140414 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:19.140419 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:19.140425 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:19.140470 121576 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[]}
I0128 18:36:19.140708 121576 addons.go:227] Setting addon default-storageclass=true in "multinode-052675"
W0128 18:36:19.140725 121576 addons.go:236] addon default-storageclass should already be in state true
I0128 18:36:19.140749 121576 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:36:19.141157 121576 cli_runner.go:164] Run: docker container inspect multinode-052675 --format={{.State.Status}}
I0128 18:36:19.161321 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:36:19.166045 121576 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0128 18:36:19.166070 121576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0128 18:36:19.166112 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:36:19.198255 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:36:19.212660 121576 command_runner.go:130] > apiVersion: v1
I0128 18:36:19.212685 121576 command_runner.go:130] > data:
I0128 18:36:19.212692 121576 command_runner.go:130] > Corefile: |
I0128 18:36:19.212698 121576 command_runner.go:130] > .:53 {
I0128 18:36:19.212704 121576 command_runner.go:130] > errors
I0128 18:36:19.212711 121576 command_runner.go:130] > health {
I0128 18:36:19.212719 121576 command_runner.go:130] > lameduck 5s
I0128 18:36:19.212724 121576 command_runner.go:130] > }
I0128 18:36:19.212731 121576 command_runner.go:130] > ready
I0128 18:36:19.212748 121576 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0128 18:36:19.212763 121576 command_runner.go:130] > pods insecure
I0128 18:36:19.212771 121576 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0128 18:36:19.212786 121576 command_runner.go:130] > ttl 30
I0128 18:36:19.212793 121576 command_runner.go:130] > }
I0128 18:36:19.212805 121576 command_runner.go:130] > prometheus :9153
I0128 18:36:19.212812 121576 command_runner.go:130] > forward . /etc/resolv.conf {
I0128 18:36:19.212820 121576 command_runner.go:130] > max_concurrent 1000
I0128 18:36:19.212829 121576 command_runner.go:130] > }
I0128 18:36:19.212836 121576 command_runner.go:130] > cache 30
I0128 18:36:19.212851 121576 command_runner.go:130] > loop
I0128 18:36:19.212863 121576 command_runner.go:130] > reload
I0128 18:36:19.212870 121576 command_runner.go:130] > loadbalance
I0128 18:36:19.212881 121576 command_runner.go:130] > }
I0128 18:36:19.212895 121576 command_runner.go:130] > kind: ConfigMap
I0128 18:36:19.212900 121576 command_runner.go:130] > metadata:
I0128 18:36:19.212915 121576 command_runner.go:130] > creationTimestamp: "2023-01-28T18:36:05Z"
I0128 18:36:19.212920 121576 command_runner.go:130] > name: coredns
I0128 18:36:19.212927 121576 command_runner.go:130] > namespace: kube-system
I0128 18:36:19.212936 121576 command_runner.go:130] > resourceVersion: "225"
I0128 18:36:19.212942 121576 command_runner.go:130] > uid: c7d533e7-b7aa-40ce-8e2b-4d63a9280357
I0128 18:36:19.215863 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
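The pipeline above edits the Corefile fetched a few lines earlier: it splices a hosts block mapping host.minikube.internal to the gateway IP ahead of the forward plugin (and adds log after errors) before replacing the ConfigMap. A hedged client-go sketch of the hosts part of that edit (same kubeconfig assumption as in the other sketches; error handling kept minimal):

    package main

    import (
        "context"
        "log"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Splice the hosts block in front of the forward plugin, once.
        hosts := "    hosts {\n       192.168.58.1 host.minikube.internal\n       fallthrough\n    }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts+"    forward .", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }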
I0128 18:36:19.389599 121576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0128 18:36:19.491522 121576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0128 18:36:19.615097 121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0128 18:36:19.615121 121576 round_trippers.go:469] Request Headers:
I0128 18:36:19.615133 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:19.615140 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:19.673165 121576 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
I0128 18:36:19.673207 121576 round_trippers.go:577] Response Headers:
I0128 18:36:19.673223 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:19.673233 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:19.673241 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:19.673253 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:19.673261 121576 round_trippers.go:580] Content-Length: 291
I0128 18:36:19.673274 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:19 GMT
I0128 18:36:19.673288 121576 round_trippers.go:580] Audit-Id: 5342c519-c51a-4a2a-9190-7b1860ba99ed
I0128 18:36:19.673574 121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"363","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0128 18:36:19.673700 121576 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-052675" context rescaled to 1 replicas
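The rescale just logged is two round trips against the scale subresource: a GET of .../deployments/coredns/scale followed by a PUT with spec.replicas lowered from 2 to 1, since a multinode profile only wants a single CoreDNS replica at this point. The typed client-go equivalent, as a sketch:

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        deployments := kubernetes.NewForConfigOrDie(cfg).AppsV1().Deployments("kube-system")

        // GET .../deployments/coredns/scale, as in the first round trip above.
        scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // PUT it back with spec.replicas dropped to 1, as in the second.
        scale.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }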
I0128 18:36:19.673738 121576 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0128 18:36:19.676939 121576 out.go:177] * Verifying Kubernetes components...
I0128 18:36:19.678806 121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0128 18:36:20.091068 121576 command_runner.go:130] > configmap/coredns replaced
I0128 18:36:20.171480 121576 start.go:919] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
I0128 18:36:20.286525 121576 command_runner.go:130] > serviceaccount/storage-provisioner created
I0128 18:36:20.292836 121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
I0128 18:36:20.300664 121576 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0128 18:36:20.372491 121576 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0128 18:36:20.382122 121576 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
I0128 18:36:20.395945 121576 command_runner.go:130] > pod/storage-provisioner created
I0128 18:36:20.473154 121576 command_runner.go:130] > storageclass.storage.k8s.io/standard created
I0128 18:36:20.473194 121576 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.083565936s)
I0128 18:36:20.479223 121576 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3259/kubeconfig
I0128 18:36:20.479583 121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 18:36:20.482848 121576 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0128 18:36:20.479935 121576 node_ready.go:35] waiting up to 6m0s for node "multinode-052675" to be "Ready" ...
I0128 18:36:20.485256 121576 addons.go:492] enable addons completed in 1.393647901s: enabled=[storage-provisioner default-storageclass]
I0128 18:36:20.485319 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:20.485329 121576 round_trippers.go:469] Request Headers:
I0128 18:36:20.485339 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:20.485348 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:20.487592 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:20.487620 121576 round_trippers.go:577] Response Headers:
I0128 18:36:20.487632 121576 round_trippers.go:580] Audit-Id: 59af8370-55e0-4c55-a843-ca0685087a00
I0128 18:36:20.487644 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:20.487653 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:20.487694 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:20.487708 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:20.487718 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:20 GMT
I0128 18:36:20.487878 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:20.488625 121576 node_ready.go:49] node "multinode-052675" has status "Ready":"True"
I0128 18:36:20.488648 121576 node_ready.go:38] duration metric: took 3.412086ms waiting for node "multinode-052675" to be "Ready" ...
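node_ready treats the node as Ready once the NodeReady condition in the object just fetched reports True, which is why the wait completes in milliseconds here. A hypothetical helper making that check explicit (clientset construction as in the earlier sketches, using the host-side kubeconfig updated above):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node's NodeReady condition is True.
    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15565-3259/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        ready, err := nodeIsReady(kubernetes.NewForConfigOrDie(cfg), "multinode-052675")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("ready:", ready)
    }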
I0128 18:36:20.488660 121576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0128 18:36:20.488772 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0128 18:36:20.488797 121576 round_trippers.go:469] Request Headers:
I0128 18:36:20.488814 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:20.488830 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:20.494471 121576 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0128 18:36:20.494546 121576 round_trippers.go:577] Response Headers:
I0128 18:36:20.494652 121576 round_trippers.go:580] Audit-Id: 14f43031-1869-4c44-bb62-da29f2e6b736
I0128 18:36:20.494678 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:20.494698 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:20.494714 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:20.494729 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:20.494755 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:20 GMT
I0128 18:36:20.495436 121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"380"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61789 chars]
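
The GET / Request Headers / Response Status / Response Headers groups that repeat throughout this trace are client-go's transport-level debug output (round_trippers.go), emitted because the test runs minikube with --alsologtostderr at high klog verbosity; the exact fields grow with the -v level, and the bodies are truncated by the logger itself, hence the "[truncated N chars]" markers. The wrapping idea, reduced to plain net/http as a hedged illustration (loggingTransport is a hypothetical name, not client-go's type):

    package main

    import (
        "fmt"
        "net/http"
    )

    // loggingTransport prints each request's method and URL plus the response
    // status and headers before handing the response back, roughly what
    // client-go's debugging round tripper logs at high verbosity.
    type loggingTransport struct {
        next http.RoundTripper
    }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        fmt.Printf("%s %s\n", req.Method, req.URL)
        resp, err := t.next.RoundTrip(req)
        if err == nil {
            fmt.Println("Response Status:", resp.Status)
            for k, v := range resp.Header {
                fmt.Printf("    %s: %v\n", k, v)
            }
        }
        return resp, err
    }

    func main() {
        client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
        resp, err := client.Get("https://example.com/")
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
    }
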
I0128 18:36:20.499384 121576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
I0128 18:36:20.499521 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:20.499545 121576 round_trippers.go:469] Request Headers:
I0128 18:36:20.499568 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:20.499586 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:20.501766 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:20.501830 121576 round_trippers.go:577] Response Headers:
I0128 18:36:20.501852 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:20.501871 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:20.501888 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:20.501904 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:20.501930 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:20 GMT
I0128 18:36:20.501949 121576 round_trippers.go:580] Audit-Id: de281ade-1d97-47da-8670-b13c796f312c
I0128 18:36:20.502089 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:20.502614 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:20.502650 121576 round_trippers.go:469] Request Headers:
I0128 18:36:20.502670 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:20.502698 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:20.504336 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:20.504379 121576 round_trippers.go:577] Response Headers:
I0128 18:36:20.504400 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:20.504417 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:20.504432 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:20.504536 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:20.504560 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:20 GMT
I0128 18:36:20.504576 121576 round_trippers.go:580] Audit-Id: 904bdb55-4534-4774-9ecd-3abe1939b745
I0128 18:36:20.505057 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
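
From here the trace settles into a steady cadence: roughly every 500ms the waiter re-fetches the coredns pod and then the node it is scheduled on, until the pod's Ready condition flips or the budget expires. A sketch of that loop with client-go's wait helper, assuming clientset is a configured *kubernetes.Clientset; this mirrors the observable behavior, not minikube's exact source:

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReady polls the pod every 500ms until its PodReady condition
    // is True or the timeout elapses, matching the half-second request
    // intervals visible in the log above.
    func waitForPodReady(clientset *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil // condition not posted yet; keep polling
        })
    }
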
I0128 18:36:21.005826 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:21.005893 121576 round_trippers.go:469] Request Headers:
I0128 18:36:21.005908 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:21.005921 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:21.008729 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:21.008749 121576 round_trippers.go:577] Response Headers:
I0128 18:36:21.008756 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:21.008762 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:21 GMT
I0128 18:36:21.008770 121576 round_trippers.go:580] Audit-Id: 57996a8b-c1ca-4236-8781-9cf9133adb2c
I0128 18:36:21.008780 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:21.008788 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:21.008800 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:21.008924 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:21.009488 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:21.009504 121576 round_trippers.go:469] Request Headers:
I0128 18:36:21.009515 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:21.009531 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:21.011543 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:21.011563 121576 round_trippers.go:577] Response Headers:
I0128 18:36:21.011572 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:21.011581 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:21 GMT
I0128 18:36:21.011589 121576 round_trippers.go:580] Audit-Id: 7944fca9-6c52-4acf-9ab3-2c9471fe708a
I0128 18:36:21.011598 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:21.011612 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:21.011625 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:21.011773 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:21.506422 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:21.506447 121576 round_trippers.go:469] Request Headers:
I0128 18:36:21.506460 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:21.506470 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:21.508914 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:21.508939 121576 round_trippers.go:577] Response Headers:
I0128 18:36:21.508948 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:21.508956 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:21.508964 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:21 GMT
I0128 18:36:21.508973 121576 round_trippers.go:580] Audit-Id: b1ba3ea6-e7b4-45bd-b514-b9eaef7c0651
I0128 18:36:21.508985 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:21.508995 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:21.509107 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:21.509667 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:21.509684 121576 round_trippers.go:469] Request Headers:
I0128 18:36:21.509694 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:21.509703 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:21.511825 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:21.511852 121576 round_trippers.go:577] Response Headers:
I0128 18:36:21.511862 121576 round_trippers.go:580] Audit-Id: d177bbb4-e48e-48e3-8c77-f6b71df8af1b
I0128 18:36:21.511871 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:21.511884 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:21.511892 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:21.511907 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:21.511924 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:21 GMT
I0128 18:36:21.512045 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:22.006590 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:22.006613 121576 round_trippers.go:469] Request Headers:
I0128 18:36:22.006626 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:22.006637 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:22.009290 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:22.009322 121576 round_trippers.go:577] Response Headers:
I0128 18:36:22.009332 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:22.009340 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:22.009349 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:22.009362 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:22.009376 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:22 GMT
I0128 18:36:22.009384 121576 round_trippers.go:580] Audit-Id: 9b9ca020-818a-4534-b5a6-5ff4222bcb56
I0128 18:36:22.009547 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:22.010154 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:22.010164 121576 round_trippers.go:469] Request Headers:
I0128 18:36:22.010172 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:22.010179 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:22.012180 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:22.012199 121576 round_trippers.go:577] Response Headers:
I0128 18:36:22.012209 121576 round_trippers.go:580] Audit-Id: 09d1f9f8-c0e9-4f42-a841-0912451dfe30
I0128 18:36:22.012217 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:22.012226 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:22.012239 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:22.012258 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:22.012269 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:22 GMT
I0128 18:36:22.012420 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:22.505952 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:22.505972 121576 round_trippers.go:469] Request Headers:
I0128 18:36:22.505980 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:22.505986 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:22.508358 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:22.508381 121576 round_trippers.go:577] Response Headers:
I0128 18:36:22.508392 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:22.508401 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:22 GMT
I0128 18:36:22.508410 121576 round_trippers.go:580] Audit-Id: 237853df-f6ce-4a41-a0e0-b96d89f6beb8
I0128 18:36:22.508421 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:22.508434 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:22.508466 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:22.508582 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:22.509046 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:22.509063 121576 round_trippers.go:469] Request Headers:
I0128 18:36:22.509077 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:22.509087 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:22.510774 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:22.510794 121576 round_trippers.go:577] Response Headers:
I0128 18:36:22.510801 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:22 GMT
I0128 18:36:22.510807 121576 round_trippers.go:580] Audit-Id: 4e71e57d-d0d7-4dd2-8d24-5a72ccf8e22d
I0128 18:36:22.510812 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:22.510817 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:22.510821 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:22.510827 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:22.510965 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:22.511239 121576 pod_ready.go:102] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"False"
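
That pod_ready.go:102 line is the verdict for this iteration: the fetched pod's Ready condition is still False (coredns was created only seconds earlier, at 18:36:18), so the loop sleeps and retries. Note also that every pod response so far carries the same resourceVersion 353, i.e. the object has not changed between polls; a watch would deliver just the eventual update instead of re-reading an unchanged pod twice a second. A hedged sketch of that alternative, under the same clientset assumption and imports as the earlier waitForPodReady sketch:

    // watchPodUntilReady opens a watch scoped to one pod and returns once an
    // event shows the PodReady condition True. Illustrative only: watch
    // restarts and richer error handling are deliberately elided.
    func watchPodUntilReady(ctx context.Context, clientset *kubernetes.Clientset, ns, name string) error {
        w, err := clientset.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
            FieldSelector: "metadata.name=" + name,
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        return ctx.Err()
    }
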
I0128 18:36:23.006648 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:23.006669 121576 round_trippers.go:469] Request Headers:
I0128 18:36:23.006680 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:23.006688 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:23.009022 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:23.009041 121576 round_trippers.go:577] Response Headers:
I0128 18:36:23.009050 121576 round_trippers.go:580] Audit-Id: 78466b3b-68c7-452c-a87a-a251ee64ab26
I0128 18:36:23.009057 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:23.009062 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:23.009073 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:23.009080 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:23.009095 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:23 GMT
I0128 18:36:23.009210 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:23.009758 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:23.009774 121576 round_trippers.go:469] Request Headers:
I0128 18:36:23.009786 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:23.009797 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:23.011660 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:23.011679 121576 round_trippers.go:577] Response Headers:
I0128 18:36:23.011689 121576 round_trippers.go:580] Audit-Id: 1c35a27e-2e84-4b0b-bf58-5f325f1d0fea
I0128 18:36:23.011697 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:23.011706 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:23.011716 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:23.011724 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:23.011735 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:23 GMT
I0128 18:36:23.011870 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:23.506473 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:23.506493 121576 round_trippers.go:469] Request Headers:
I0128 18:36:23.506502 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:23.506509 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:23.508547 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:23.508573 121576 round_trippers.go:577] Response Headers:
I0128 18:36:23.508583 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:23.508588 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:23.508594 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:23 GMT
I0128 18:36:23.508599 121576 round_trippers.go:580] Audit-Id: 301a49be-703a-4dc7-883f-cc8a8af40039
I0128 18:36:23.508604 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:23.508609 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:23.508706 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:23.509124 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:23.509137 121576 round_trippers.go:469] Request Headers:
I0128 18:36:23.509144 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:23.509152 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:23.510705 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:23.510722 121576 round_trippers.go:577] Response Headers:
I0128 18:36:23.510729 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:23 GMT
I0128 18:36:23.510735 121576 round_trippers.go:580] Audit-Id: f51559a8-6e53-4692-b4cb-9c608854cedb
I0128 18:36:23.510740 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:23.510746 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:23.510751 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:23.510765 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:23.510855 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:24.006555 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:24.006577 121576 round_trippers.go:469] Request Headers:
I0128 18:36:24.006586 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:24.006592 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:24.008864 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:24.008901 121576 round_trippers.go:577] Response Headers:
I0128 18:36:24.008912 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:24 GMT
I0128 18:36:24.008922 121576 round_trippers.go:580] Audit-Id: a67d4579-ee6d-41a4-83f3-9e7ab7a7c91c
I0128 18:36:24.008935 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:24.008940 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:24.008945 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:24.008953 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:24.009060 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:24.009492 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:24.009507 121576 round_trippers.go:469] Request Headers:
I0128 18:36:24.009514 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:24.009520 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:24.011222 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:24.011237 121576 round_trippers.go:577] Response Headers:
I0128 18:36:24.011253 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:24.011262 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:24.011274 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:24 GMT
I0128 18:36:24.011285 121576 round_trippers.go:580] Audit-Id: 2e409c88-9504-498f-957f-333576ddef2e
I0128 18:36:24.011293 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:24.011311 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:24.011442 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:24.505933 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:24.505953 121576 round_trippers.go:469] Request Headers:
I0128 18:36:24.505961 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:24.505968 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:24.508179 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:24.508199 121576 round_trippers.go:577] Response Headers:
I0128 18:36:24.508206 121576 round_trippers.go:580] Audit-Id: 2c465c20-9353-4c9d-bd27-8e35558c5c4c
I0128 18:36:24.508211 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:24.508217 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:24.508222 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:24.508227 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:24.508232 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:24 GMT
I0128 18:36:24.508320 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:24.508773 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:24.508787 121576 round_trippers.go:469] Request Headers:
I0128 18:36:24.508794 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:24.508800 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:24.510600 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:24.510623 121576 round_trippers.go:577] Response Headers:
I0128 18:36:24.510633 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:24 GMT
I0128 18:36:24.510642 121576 round_trippers.go:580] Audit-Id: 16b48d69-5451-4900-8f1f-45342a471b0d
I0128 18:36:24.510649 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:24.510656 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:24.510665 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:24.510673 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:24.510848 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:25.006403 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:25.006423 121576 round_trippers.go:469] Request Headers:
I0128 18:36:25.006432 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:25.006438 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:25.008809 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:25.008833 121576 round_trippers.go:577] Response Headers:
I0128 18:36:25.008842 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:25.008850 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:25.008860 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:25.008868 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:25.008878 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:25 GMT
I0128 18:36:25.008887 121576 round_trippers.go:580] Audit-Id: 859cdb7b-1891-40d3-9007-39b8a2fdeac4
I0128 18:36:25.008985 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:25.009446 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:25.009460 121576 round_trippers.go:469] Request Headers:
I0128 18:36:25.009469 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:25.009475 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:25.011141 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:25.011162 121576 round_trippers.go:577] Response Headers:
I0128 18:36:25.011173 121576 round_trippers.go:580] Audit-Id: 83044aba-ec10-4d7f-9cb2-b3e08bb5b6b5
I0128 18:36:25.011181 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:25.011190 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:25.011215 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:25.011228 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:25.011241 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:25 GMT
I0128 18:36:25.011360 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:25.011650 121576 pod_ready.go:102] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"False"
I0128 18:36:25.505953 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:25.505974 121576 round_trippers.go:469] Request Headers:
I0128 18:36:25.505984 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:25.505992 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:25.507810 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:25.507833 121576 round_trippers.go:577] Response Headers:
I0128 18:36:25.507843 121576 round_trippers.go:580] Audit-Id: fb6fa7d0-139a-4b68-8dce-2cca21119354
I0128 18:36:25.507852 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:25.507860 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:25.507868 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:25.507877 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:25.507888 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:25 GMT
I0128 18:36:25.507973 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:25.508525 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:25.508540 121576 round_trippers.go:469] Request Headers:
I0128 18:36:25.508551 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:25.508561 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:25.510090 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:25.510111 121576 round_trippers.go:577] Response Headers:
I0128 18:36:25.510120 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:25.510128 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:25.510136 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:25.510153 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:25.510166 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:25 GMT
I0128 18:36:25.510175 121576 round_trippers.go:580] Audit-Id: ba35f305-b51d-4d97-b71d-db0b26104244
I0128 18:36:25.510267 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:26.005850 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:26.005871 121576 round_trippers.go:469] Request Headers:
I0128 18:36:26.005879 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:26.005886 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:26.008372 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:26.008398 121576 round_trippers.go:577] Response Headers:
I0128 18:36:26.008409 121576 round_trippers.go:580] Audit-Id: 1d2f296f-ae24-497c-9071-36a734b290ab
I0128 18:36:26.008418 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:26.008427 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:26.008435 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:26.008468 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:26.008477 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:26 GMT
I0128 18:36:26.008582 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:26.009144 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:26.009161 121576 round_trippers.go:469] Request Headers:
I0128 18:36:26.009172 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:26.009182 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:26.010999 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:26.011017 121576 round_trippers.go:577] Response Headers:
I0128 18:36:26.011026 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:26 GMT
I0128 18:36:26.011035 121576 round_trippers.go:580] Audit-Id: 7b888b56-f68f-441a-a79f-975f54fc887e
I0128 18:36:26.011042 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:26.011051 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:26.011062 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:26.011071 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:26.011184 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"317","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:26.505770 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:26.505791 121576 round_trippers.go:469] Request Headers:
I0128 18:36:26.505799 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:26.505806 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:26.508062 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:26.508084 121576 round_trippers.go:577] Response Headers:
I0128 18:36:26.508094 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:26 GMT
I0128 18:36:26.508103 121576 round_trippers.go:580] Audit-Id: 9f5433c5-002d-4ee5-9446-a23c15744df8
I0128 18:36:26.508116 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:26.508125 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:26.508133 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:26.508142 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:26.508259 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:26.508772 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:26.508787 121576 round_trippers.go:469] Request Headers:
I0128 18:36:26.508794 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:26.508801 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:26.510406 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:26.510422 121576 round_trippers.go:577] Response Headers:
I0128 18:36:26.510429 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:26.510435 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:26 GMT
I0128 18:36:26.510439 121576 round_trippers.go:580] Audit-Id: 85a016d6-a471-4d45-8ed2-d9ce82741cf0
I0128 18:36:26.510444 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:26.510450 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:26.510455 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:26.510595 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:27.006282 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:27.006309 121576 round_trippers.go:469] Request Headers:
I0128 18:36:27.006321 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:27.006331 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:27.008631 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:27.008656 121576 round_trippers.go:577] Response Headers:
I0128 18:36:27.008665 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:27.008673 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:27 GMT
I0128 18:36:27.008686 121576 round_trippers.go:580] Audit-Id: 6bf7ed5a-6354-4190-9cd4-b7abc1f35099
I0128 18:36:27.008695 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:27.008703 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:27.008714 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:27.008833 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:27.009277 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:27.009293 121576 round_trippers.go:469] Request Headers:
I0128 18:36:27.009303 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:27.009312 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:27.011332 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:27.011350 121576 round_trippers.go:577] Response Headers:
I0128 18:36:27.011357 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:27.011362 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:27 GMT
I0128 18:36:27.011367 121576 round_trippers.go:580] Audit-Id: a86c03c5-e667-441e-9285-e5562334b3f3
I0128 18:36:27.011372 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:27.011378 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:27.011382 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:27.011474 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:27.011796 121576 pod_ready.go:102] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"False"
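[editor's note: the "Ready":"False" verdict above is derived from the PodReady condition inside the Pod JSON bodies being fetched. A minimal client-go sketch of that check — podIsReady and the kubeconfig path are illustrative, not minikube's actual pod_ready.go:]

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition in the status block
// (visible in the truncated JSON bodies above) is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig on disk; minikube wires up its client internally.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-787d4945fb-c28p8", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}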
I0128 18:36:27.506116 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:27.506140 121576 round_trippers.go:469] Request Headers:
I0128 18:36:27.506152 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:27.506159 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:27.508289 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:27.508311 121576 round_trippers.go:577] Response Headers:
I0128 18:36:27.508321 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:27 GMT
I0128 18:36:27.508329 121576 round_trippers.go:580] Audit-Id: 78c8c6e3-0b9a-4c24-8e6a-42871028ccf5
I0128 18:36:27.508340 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:27.508349 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:27.508362 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:27.508373 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:27.508479 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:27.508959 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:27.508971 121576 round_trippers.go:469] Request Headers:
I0128 18:36:27.508978 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:27.508984 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:27.510675 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:27.510695 121576 round_trippers.go:577] Response Headers:
I0128 18:36:27.510704 121576 round_trippers.go:580] Audit-Id: 58b48ffc-a8c8-4c3e-85ae-cad6dfb979f7
I0128 18:36:27.510714 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:27.510721 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:27.510729 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:27.510737 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:27.510750 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:27 GMT
I0128 18:36:27.510875 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:28.006498 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:28.006520 121576 round_trippers.go:469] Request Headers:
I0128 18:36:28.006528 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:28.006535 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:28.008923 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:28.008951 121576 round_trippers.go:577] Response Headers:
I0128 18:36:28.008962 121576 round_trippers.go:580] Audit-Id: 738fd5ed-e1cd-439e-a03b-160681435fdc
I0128 18:36:28.008971 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:28.008980 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:28.008988 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:28.008995 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:28.009000 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:28 GMT
I0128 18:36:28.009116 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:28.009573 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:28.009587 121576 round_trippers.go:469] Request Headers:
I0128 18:36:28.009595 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:28.009601 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:28.011529 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:28.011556 121576 round_trippers.go:577] Response Headers:
I0128 18:36:28.011564 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:28 GMT
I0128 18:36:28.011570 121576 round_trippers.go:580] Audit-Id: efe64178-7bca-465a-bb4f-f16115865993
I0128 18:36:28.011576 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:28.011582 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:28.011587 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:28.011595 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:28.011786 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:28.506317 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:28.506344 121576 round_trippers.go:469] Request Headers:
I0128 18:36:28.506353 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:28.506359 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:28.508828 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:28.508846 121576 round_trippers.go:577] Response Headers:
I0128 18:36:28.508853 121576 round_trippers.go:580] Audit-Id: d2cc854c-9901-4df8-b06a-782e010d87d3
I0128 18:36:28.508859 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:28.508867 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:28.508872 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:28.508878 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:28.508883 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:28 GMT
I0128 18:36:28.508985 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"353","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0128 18:36:28.509466 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:28.509483 121576 round_trippers.go:469] Request Headers:
I0128 18:36:28.509490 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:28.509497 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:28.511720 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:28.511737 121576 round_trippers.go:577] Response Headers:
I0128 18:36:28.511744 121576 round_trippers.go:580] Audit-Id: c954e0e9-1b66-415e-9b6a-dbe595fd5ec0
I0128 18:36:28.511750 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:28.511762 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:28.511768 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:28.511773 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:28.511778 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:28 GMT
I0128 18:36:28.511913 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:29.005869 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:29.005890 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.005899 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.005906 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.008438 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:29.008492 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.008503 121576 round_trippers.go:580] Audit-Id: 49804a2f-46e7-43af-867e-2803cd5977e2
I0128 18:36:29.008521 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.008531 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.008537 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.008543 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.008548 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.008636 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5942 chars]
I0128 18:36:29.009187 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.009206 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.009216 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.009226 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.011163 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.011182 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.011188 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.011196 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.011207 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.011225 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.011238 121576 round_trippers.go:580] Audit-Id: e7c16605-b219-4a9c-ba9d-5b64ed13cf65
I0128 18:36:29.011250 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.011360 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:29.011717 121576 pod_ready.go:92] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:29.011739 121576 pod_ready.go:81] duration metric: took 8.512295381s waiting for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.011754 121576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.011833 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-nzbz8
I0128 18:36:29.011846 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.011854 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.011862 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.013527 121576 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
I0128 18:36:29.013557 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.013567 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.013577 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.013584 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.013593 121576 round_trippers.go:580] Content-Length: 216
I0128 18:36:29.013598 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.013606 121576 round_trippers.go:580] Audit-Id: ce5a0787-cf79-48dc-a122-8dace6298f61
I0128 18:36:29.013611 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.013632 121576 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-nzbz8\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-nzbz8","kind":"pods"},"code":404}
I0128 18:36:29.013824 121576 pod_ready.go:97] error getting pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-nzbz8" not found
I0128 18:36:29.013841 121576 pod_ready.go:81] duration metric: took 2.078165ms waiting for pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace to be "Ready" ...
E0128 18:36:29.013853 121576 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-nzbz8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-nzbz8" not found
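[editor's note: the 404 above is treated as a skip, not a failure — the replica was replaced. A sketch of that branch, assuming client-go; waitOrSkip is a hypothetical name:]

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitOrSkip mirrors the "(skipping!)" branch: a 404 response decodes into
// a StatusError that apierrors.IsNotFound recognizes.
func waitOrSkip(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Printf("pod %q not found, skipping\n", name)
		return nil // pod was deleted, e.g. a replaced coredns replica
	}
	return err // nil on success, or a real error worth surfacing
}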
I0128 18:36:29.013863 121576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.013918 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-052675
I0128 18:36:29.013928 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.013938 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.013953 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.015703 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.015726 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.015736 121576 round_trippers.go:580] Audit-Id: 8316fddf-a2b6-42f8-9b16-5bb1626a02b7
I0128 18:36:29.015745 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.015754 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.015763 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.015769 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.015775 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.015869 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-052675","namespace":"kube-system","uid":"cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d","resourceVersion":"261","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.mirror":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.seen":"2023-01-28T18:36:05.844239404Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
I0128 18:36:29.016252 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.016265 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.016272 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.016278 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.018058 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.018079 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.018088 121576 round_trippers.go:580] Audit-Id: ab0a4211-b7c6-4a2d-822e-3d014c0640a7
I0128 18:36:29.018095 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.018103 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.018111 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.018119 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.018130 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.018216 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:29.018477 121576 pod_ready.go:92] pod "etcd-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:29.018488 121576 pod_ready.go:81] duration metric: took 4.6155ms waiting for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.018500 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.018542 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-052675
I0128 18:36:29.018549 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.018557 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.018563 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.020271 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.020288 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.020294 121576 round_trippers.go:580] Audit-Id: cd2847b9-c39b-4d07-98d5-a1a37c5a86b1
I0128 18:36:29.020299 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.020304 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.020309 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.020314 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.020320 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.020423 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-052675","namespace":"kube-system","uid":"c9b8edb5-77fc-4191-b470-8a73c76a3a73","resourceVersion":"291","creationTimestamp":"2023-01-28T18:36:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.mirror":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.seen":"2023-01-28T18:35:55.862480624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
I0128 18:36:29.020827 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.020841 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.020848 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.020855 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.022491 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.022518 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.022529 121576 round_trippers.go:580] Audit-Id: 0c8c4e59-0305-496a-bcdb-8f4cc71feb5d
I0128 18:36:29.022543 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.022553 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.022563 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.022576 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.022589 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.022672 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:29.022927 121576 pod_ready.go:92] pod "kube-apiserver-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:29.022940 121576 pod_ready.go:81] duration metric: took 4.433077ms waiting for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.022949 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.022995 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-052675
I0128 18:36:29.023003 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.023010 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.023016 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.024647 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.024667 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.024676 121576 round_trippers.go:580] Audit-Id: c24b7303-0555-451b-a87f-cc6c3e5fd2a1
I0128 18:36:29.024685 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.024698 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.024710 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.024721 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.024731 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.024846 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-052675","namespace":"kube-system","uid":"6dd849f3-f4b3-4704-a3c5-671cb6a2350c","resourceVersion":"276","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.mirror":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.seen":"2023-01-28T18:36:05.844267614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
I0128 18:36:29.025242 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.025255 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.025265 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.025275 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.026855 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.026876 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.026886 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.026892 121576 round_trippers.go:580] Audit-Id: b5811475-b4fb-4dce-b8f2-09d3bcc81b61
I0128 18:36:29.026897 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.026903 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.026914 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.026922 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.027002 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:29.027288 121576 pod_ready.go:92] pod "kube-controller-manager-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:29.027299 121576 pod_ready.go:81] duration metric: took 4.344922ms waiting for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.027308 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.027345 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hz5nz
I0128 18:36:29.027353 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.027359 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.027366 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.028780 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:29.028797 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.028806 121576 round_trippers.go:580] Audit-Id: e62d4c8d-ca71-4b31-862f-d8f7ddd58f52
I0128 18:36:29.028814 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.028822 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.028832 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.028845 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.028862 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.028940 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hz5nz","generateName":"kube-proxy-","namespace":"kube-system","uid":"85457440-94b9-4686-be3e-dc5b5cbc0fbb","resourceVersion":"390","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
I0128 18:36:29.206291 121576 request.go:622] Waited for 176.99155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.206357 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.206362 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.206369 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.206376 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.208602 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:29.208627 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.208638 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.208649 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.208657 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.208665 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.208681 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.208690 121576 round_trippers.go:580] Audit-Id: 5c861d99-7560-4414-a819-82d1a0c8b1f8
I0128 18:36:29.208796 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:29.209106 121576 pod_ready.go:92] pod "kube-proxy-hz5nz" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:29.209120 121576 pod_ready.go:81] duration metric: took 181.807231ms waiting for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
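[editor's note: the "Waited for ... due to client-side throttling" lines in this section come from client-go's token-bucket rate limiter, configured through rest.Config. A sketch of the two knobs involved — the QPS/Burst values are illustrative, not minikube's:]

package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

// throttledConfig shows the settings behind the throttling message; the
// host is the apiserver endpoint from the log.
func throttledConfig() *rest.Config {
	cfg := &rest.Config{
		Host:  "https://192.168.58.2:8443",
		QPS:   5,  // steady-state requests per second
		Burst: 10, // short bursts allowed before Wait() starts blocking
	}
	// Setting QPS/Burst is equivalent to installing this token-bucket limiter.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
	return cfg
}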
I0128 18:36:29.209128 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.406553 121576 request.go:622] Waited for 197.34467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
I0128 18:36:29.406611 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
I0128 18:36:29.406616 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.406624 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.406630 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.408808 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:29.408833 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.408843 121576 round_trippers.go:580] Audit-Id: 94ddde5b-db8d-4b6c-ab1b-189ebad0d69d
I0128 18:36:29.408853 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.408862 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.408871 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.408879 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.408892 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.409007 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-052675","namespace":"kube-system","uid":"b93c851a-ef3e-45a2-88b6-08bf615609f3","resourceVersion":"263","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.mirror":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.seen":"2023-01-28T18:36:05.844268554Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
I0128 18:36:29.606803 121576 request.go:622] Waited for 197.353306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.606851 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:29.606860 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.606868 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.606875 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.609109 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:29.609130 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.609137 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.609142 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.609150 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.609158 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.609166 121576 round_trippers.go:580] Audit-Id: 0010eadb-122e-4380-a1e2-1d20d4646c71
I0128 18:36:29.609173 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.609275 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5001 chars]
I0128 18:36:29.609571 121576 pod_ready.go:92] pod "kube-scheduler-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:29.609584 121576 pod_ready.go:81] duration metric: took 400.450424ms waiting for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:29.609594 121576 pod_ready.go:38] duration metric: took 9.120885659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
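[editor's note: the request timestamps above show the overall cadence: one GET roughly every 500ms per pod against a 6m0s budget. A hypothetical reconstruction of that loop with apimachinery's wait helper; waitPodReady is an illustrative name:]

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the PodReady condition turns True or the
// 6m0s budget runs out, matching the "waiting up to 6m0s" lines.
func waitPodReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}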
I0128 18:36:29.609611 121576 api_server.go:51] waiting for apiserver process to appear ...
I0128 18:36:29.609649 121576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0128 18:36:29.619199 121576 command_runner.go:130] > 2103
I0128 18:36:29.619944 121576 api_server.go:71] duration metric: took 9.946165762s to wait for apiserver process to appear ...
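[editor's note: the "> 2103" line above is the PID printed by pgrep. minikube runs that command over SSH inside the node container; a local stand-in for illustration, assuming os/exec:]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID runs `pgrep -xnf <pattern>`, which prints the newest PID
// whose full command line matches the pattern.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver process not found: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}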
I0128 18:36:29.619966 121576 api_server.go:87] waiting for apiserver healthz status ...
I0128 18:36:29.619979 121576 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
I0128 18:36:29.624209 121576 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
ok
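[editor's note: a healthy apiserver answers GET /healthz with 200 and the literal body "ok", as above. An equivalent, assumption-level probe through the authenticated REST client (minikube's api_server.go makes its own HTTPS call):]

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkHealthz issues GET /healthz through the discovery REST client.
func checkHealthz(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	return nil
}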
I0128 18:36:29.624264 121576 round_trippers.go:463] GET https://192.168.58.2:8443/version
I0128 18:36:29.624275 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.624287 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.624301 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.624982 121576 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0128 18:36:29.624999 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.625009 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.625016 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.625025 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.625033 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.625040 121576 round_trippers.go:580] Content-Length: 263
I0128 18:36:29.625047 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.625054 121576 round_trippers.go:580] Audit-Id: ced58f6f-f49c-472f-8174-66cd7431a080
I0128 18:36:29.625075 121576 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.1",
"gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
"gitTreeState": "clean",
"buildDate": "2023-01-18T15:51:25Z",
"goVersion": "go1.19.5",
"compiler": "gc",
"platform": "linux/amd64"
}
I0128 18:36:29.625171 121576 api_server.go:140] control plane version: v1.26.1
I0128 18:36:29.625185 121576 api_server.go:130] duration metric: took 5.211798ms to wait for apiserver health ...
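[editor's note: the /version body above decodes into apimachinery's version.Info struct. A sketch of reading it via client-go's discovery client; controlPlaneVersion is an illustrative name:]

package main

import (
	"k8s.io/client-go/kubernetes"
)

// controlPlaneVersion issues the same GET /version request logged above;
// GitVersion is the "v1.26.1" the log reports as the control plane version.
func controlPlaneVersion(cs kubernetes.Interface) (string, error) {
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	return info.GitVersion, nil
}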
I0128 18:36:29.625195 121576 system_pods.go:43] waiting for kube-system pods to appear ...
I0128 18:36:29.806582 121576 request.go:622] Waited for 181.319721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0128 18:36:29.806628 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0128 18:36:29.806634 121576 round_trippers.go:469] Request Headers:
I0128 18:36:29.806654 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:29.806682 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:29.809887 121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0128 18:36:29.809907 121576 round_trippers.go:577] Response Headers:
I0128 18:36:29.809915 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:29.809921 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:29.809927 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:29.809933 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:29.809938 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:29 GMT
I0128 18:36:29.809944 121576 round_trippers.go:580] Audit-Id: a777a3b2-022e-4eae-b8f2-44e8b190b09e
I0128 18:36:29.810438 121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54993 chars]
I0128 18:36:29.812177 121576 system_pods.go:59] 8 kube-system pods found
I0128 18:36:29.812197 121576 system_pods.go:61] "coredns-787d4945fb-c28p8" [d87aee89-96d2-4627-a7ec-00a4d69653aa] Running
I0128 18:36:29.812202 121576 system_pods.go:61] "etcd-multinode-052675" [cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d] Running
I0128 18:36:29.812207 121576 system_pods.go:61] "kindnet-8pkk5" [195e6421-dfdc-4781-bf15-3aa74552b4f8] Running
I0128 18:36:29.812212 121576 system_pods.go:61] "kube-apiserver-multinode-052675" [c9b8edb5-77fc-4191-b470-8a73c76a3a73] Running
I0128 18:36:29.812217 121576 system_pods.go:61] "kube-controller-manager-multinode-052675" [6dd849f3-f4b3-4704-a3c5-671cb6a2350c] Running
I0128 18:36:29.812225 121576 system_pods.go:61] "kube-proxy-hz5nz" [85457440-94b9-4686-be3e-dc5b5cbc0fbb] Running
I0128 18:36:29.812231 121576 system_pods.go:61] "kube-scheduler-multinode-052675" [b93c851a-ef3e-45a2-88b6-08bf615609f3] Running
I0128 18:36:29.812237 121576 system_pods.go:61] "storage-provisioner" [c317fca6-6da2-4fa0-9db8-6caf19aebf98] Running
I0128 18:36:29.812242 121576 system_pods.go:74] duration metric: took 187.042112ms to wait for pod list to return data ...
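[editor's note: the "8 kube-system pods found ... Running" lines correspond to a pod list plus a phase check; a minimal client-go sketch of the same query, assuming credentials in the default kubeconfig location rather than minikube's internal client.]

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: ~/.kube/config points at the running cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		running := p.Status.Phase == corev1.PodRunning // the "Running" check in the log
		fmt.Printf("%q [%s] Running=%v\n", p.Name, p.UID, running)
	}
}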
I0128 18:36:29.812252 121576 default_sa.go:34] waiting for default service account to be created ...
I0128 18:36:30.006534 121576 request.go:622] Waited for 194.214569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
I0128 18:36:30.006619 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
I0128 18:36:30.006627 121576 round_trippers.go:469] Request Headers:
I0128 18:36:30.006639 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:30.006650 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:30.008947 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:30.008969 121576 round_trippers.go:577] Response Headers:
I0128 18:36:30.008979 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:30.008987 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:30.008995 121576 round_trippers.go:580] Content-Length: 261
I0128 18:36:30.009004 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:30 GMT
I0128 18:36:30.009017 121576 round_trippers.go:580] Audit-Id: 105177b7-80c4-47ba-80d3-14b9590892be
I0128 18:36:30.009029 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:30.009042 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:30.009073 121576 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0deb750e-d81d-409e-bde7-902fc8bf838b","resourceVersion":"336","creationTimestamp":"2023-01-28T18:36:18Z"}}]}
I0128 18:36:30.009248 121576 default_sa.go:45] found service account: "default"
I0128 18:36:30.009261 121576 default_sa.go:55] duration metric: took 197.00171ms for default service account to be created ...
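[editor's note: the recurring "Waited for ... due to client-side throttling, not priority and fairness" messages come from the client's own token-bucket rate limiter, not the server. A sketch of the mechanism with golang.org/x/time/rate; the 5 QPS / burst 10 values are illustrative, roughly client-go's long-standing defaults, and may differ per config.]

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // assumed values, see note above
	for i := 0; i < 12; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if wait := time.Since(start); wait > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait)
		}
	}
}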
I0128 18:36:30.009270 121576 system_pods.go:116] waiting for k8s-apps to be running ...
I0128 18:36:30.206720 121576 request.go:622] Waited for 197.378995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0128 18:36:30.206796 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0128 18:36:30.206805 121576 round_trippers.go:469] Request Headers:
I0128 18:36:30.206814 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:30.206824 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:30.210023 121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0128 18:36:30.210046 121576 round_trippers.go:577] Response Headers:
I0128 18:36:30.210053 121576 round_trippers.go:580] Audit-Id: 611e31ce-60ee-42e1-88af-be34369063da
I0128 18:36:30.210059 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:30.210064 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:30.210073 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:30.210079 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:30.210088 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:30 GMT
I0128 18:36:30.210541 121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54993 chars]
I0128 18:36:30.212203 121576 system_pods.go:86] 8 kube-system pods found
I0128 18:36:30.212221 121576 system_pods.go:89] "coredns-787d4945fb-c28p8" [d87aee89-96d2-4627-a7ec-00a4d69653aa] Running
I0128 18:36:30.212226 121576 system_pods.go:89] "etcd-multinode-052675" [cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d] Running
I0128 18:36:30.212231 121576 system_pods.go:89] "kindnet-8pkk5" [195e6421-dfdc-4781-bf15-3aa74552b4f8] Running
I0128 18:36:30.212235 121576 system_pods.go:89] "kube-apiserver-multinode-052675" [c9b8edb5-77fc-4191-b470-8a73c76a3a73] Running
I0128 18:36:30.212239 121576 system_pods.go:89] "kube-controller-manager-multinode-052675" [6dd849f3-f4b3-4704-a3c5-671cb6a2350c] Running
I0128 18:36:30.212243 121576 system_pods.go:89] "kube-proxy-hz5nz" [85457440-94b9-4686-be3e-dc5b5cbc0fbb] Running
I0128 18:36:30.212247 121576 system_pods.go:89] "kube-scheduler-multinode-052675" [b93c851a-ef3e-45a2-88b6-08bf615609f3] Running
I0128 18:36:30.212251 121576 system_pods.go:89] "storage-provisioner" [c317fca6-6da2-4fa0-9db8-6caf19aebf98] Running
I0128 18:36:30.212257 121576 system_pods.go:126] duration metric: took 202.982661ms to wait for k8s-apps to be running ...
I0128 18:36:30.212263 121576 system_svc.go:44] waiting for kubelet service to be running ....
I0128 18:36:30.212300 121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0128 18:36:30.221949 121576 system_svc.go:56] duration metric: took 9.674112ms WaitForService to wait for kubelet.
I0128 18:36:30.221971 121576 kubeadm.go:578] duration metric: took 10.548199142s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
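[editor's note: the map in the kubeadm duration line is the set of components verified (apiserver, apps_running, default_sa, kubelet, node_ready, system_pods). Waiting on several independent readiness checks is naturally expressed with golang.org/x/sync/errgroup; this is a shape sketch with placeholder checks, not minikube's implementation, which runs them in its own order.]

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	checks := map[string]func(context.Context) error{
		// Placeholders standing in for the real probes named in the log.
		"apiserver":    func(ctx context.Context) error { return nil },
		"system_pods":  func(ctx context.Context) error { return nil },
		"default_sa":   func(ctx context.Context) error { return nil },
		"apps_running": func(ctx context.Context) error { return nil },
		"kubelet":      func(ctx context.Context) error { return nil },
		"node_ready":   func(ctx context.Context) error { return nil },
	}
	g, ctx := errgroup.WithContext(context.Background())
	for name, check := range checks {
		name, check := name, check // capture loop variables
		g.Go(func() error {
			if err := check(ctx); err != nil {
				return fmt.Errorf("%s: %w", name, err)
			}
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		panic(err)
	}
	fmt.Println("all components verified")
}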
I0128 18:36:30.221989 121576 node_conditions.go:102] verifying NodePressure condition ...
I0128 18:36:30.406392 121576 request.go:622] Waited for 184.323187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
I0128 18:36:30.406441 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
I0128 18:36:30.406446 121576 round_trippers.go:469] Request Headers:
I0128 18:36:30.406453 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:30.406459 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:30.408669 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:30.408690 121576 round_trippers.go:577] Response Headers:
I0128 18:36:30.408697 121576 round_trippers.go:580] Audit-Id: 109f5a11-ecaa-4210-857b-85b7807b1975
I0128 18:36:30.408703 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:30.408708 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:30.408713 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:30.408719 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:30.408725 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:30 GMT
I0128 18:36:30.408863 121576 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"412","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5054 chars]
I0128 18:36:30.409231 121576 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0128 18:36:30.409253 121576 node_conditions.go:123] node cpu capacity is 8
I0128 18:36:30.409268 121576 node_conditions.go:105] duration metric: took 187.274258ms to run NodePressure ...
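[editor's note: the NodePressure step reads the node's capacity fields from the NodeList above. Kubernetes quantities such as "304681132Ki" parse with k8s.io/apimachinery's resource package; a short sketch using the values from this run.]

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values taken from the node_conditions lines above.
	storage := resource.MustParse("304681132Ki")
	cpu := resource.MustParse("8")
	fmt.Printf("ephemeral storage: %d bytes (~%d GiB)\n",
		storage.Value(), storage.Value()/(1<<30))
	fmt.Printf("cpu capacity: %d\n", cpu.Value())
}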
I0128 18:36:30.409294 121576 start.go:228] waiting for startup goroutines ...
I0128 18:36:30.409303 121576 start.go:233] waiting for cluster config update ...
I0128 18:36:30.409315 121576 start.go:240] writing updated cluster config ...
I0128 18:36:30.412218 121576 out.go:177]
I0128 18:36:30.414078 121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:36:30.414155 121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:36:30.416389 121576 out.go:177] * Starting worker node multinode-052675-m02 in cluster multinode-052675
I0128 18:36:30.417861 121576 cache.go:120] Beginning downloading kic base image for docker with docker
I0128 18:36:30.419573 121576 out.go:177] * Pulling base image ...
I0128 18:36:30.421938 121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:36:30.421971 121576 cache.go:57] Caching tarball of preloaded images
I0128 18:36:30.422040 121576 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
I0128 18:36:30.422072 121576 preload.go:174] Found /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0128 18:36:30.422083 121576 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0128 18:36:30.422167 121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:36:30.444938 121576 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
I0128 18:36:30.444960 121576 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
I0128 18:36:30.444977 121576 cache.go:193] Successfully downloaded all kic artifacts
I0128 18:36:30.445006 121576 start.go:364] acquiring machines lock for multinode-052675-m02: {Name:mk6ab41f77e252b7e855a5b64fa8f991c0831770 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0128 18:36:30.445103 121576 start.go:368] acquired machines lock for "multinode-052675-m02" in 78.661µs
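[editor's note: the machines-lock lines show a named lock with a retry delay and timeout (the Delay:500ms Timeout:10m0s fields). An in-process sketch of that shape follows; minikube's real lock is file-based so separate minikube processes also exclude each other, which this toy version does not attempt.]

package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	locks = map[string]chan struct{}{}
)

// acquire takes the named lock, polling every delay until timeout, mirroring
// the Delay/Timeout fields printed in the log.
func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		mu.Lock()
		ch, ok := locks[name]
		if !ok {
			ch = make(chan struct{}, 1)
			locks[name] = ch
		}
		mu.Unlock()
		select {
		case ch <- struct{}{}:
			return func() { <-ch }, nil
		default:
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %q", name)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("multinode-052675-m02", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("acquired machines lock")
}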
I0128 18:36:30.445125 121576 start.go:93] Provisioning new machine with config: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0128 18:36:30.445200 121576 start.go:125] createHost starting for "m02" (driver="docker")
I0128 18:36:30.447846 121576 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0128 18:36:30.447967 121576 start.go:159] libmachine.API.Create for "multinode-052675" (driver="docker")
I0128 18:36:30.447995 121576 client.go:168] LocalClient.Create starting
I0128 18:36:30.448068 121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem
I0128 18:36:30.448095 121576 main.go:141] libmachine: Decoding PEM data...
I0128 18:36:30.448111 121576 main.go:141] libmachine: Parsing certificate...
I0128 18:36:30.448172 121576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem
I0128 18:36:30.448189 121576 main.go:141] libmachine: Decoding PEM data...
I0128 18:36:30.448203 121576 main.go:141] libmachine: Parsing certificate...
I0128 18:36:30.448388 121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0128 18:36:30.471277 121576 network_create.go:76] Found existing network {name:multinode-052675 subnet:0xc000ff7b90 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
I0128 18:36:30.471311 121576 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-052675-m02" container
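[editor's note: the static IP "192.168.58.3" follows from the existing network found above: the gateway is 192.168.58.1, the primary node took .2, so m02 gets the next offset. A naive sketch of that arithmetic, assuming an IPv4 /24 so the last octet never wraps.]

package main

import (
	"fmt"
	"net"
)

// ipAtOffset returns gateway+offset, e.g. offset 1 for the primary node and
// offset 2 for m02.
func ipAtOffset(gateway string, offset int) net.IP {
	ip := net.ParseIP(gateway).To4()
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(offset) // naive: no overflow handling beyond a /24
	return out
}

func main() {
	fmt.Println(ipAtOffset("192.168.58.1", 2)) // 192.168.58.3 for node m02
}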
I0128 18:36:30.471361 121576 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0128 18:36:30.495006 121576 cli_runner.go:164] Run: docker volume create multinode-052675-m02 --label name.minikube.sigs.k8s.io=multinode-052675-m02 --label created_by.minikube.sigs.k8s.io=true
I0128 18:36:30.517466 121576 oci.go:103] Successfully created a docker volume multinode-052675-m02
I0128 18:36:30.517533 121576 cli_runner.go:164] Run: docker run --rm --name multinode-052675-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675-m02 --entrypoint /usr/bin/test -v multinode-052675-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
I0128 18:36:31.069870 121576 oci.go:107] Successfully prepared a docker volume multinode-052675-m02
I0128 18:36:31.069909 121576 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0128 18:36:31.069928 121576 kic.go:190] Starting extracting preloaded images to volume ...
I0128 18:36:31.069992 121576 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
I0128 18:36:36.023423 121576 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3259/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-052675-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (4.953360732s)
I0128 18:36:36.023459 121576 kic.go:199] duration metric: took 4.953526 seconds to extract preloaded images to volume
W0128 18:36:36.023616 121576 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0128 18:36:36.023730 121576 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0128 18:36:36.124263 121576 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-052675-m02 --name multinode-052675-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-052675-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-052675-m02 --network multinode-052675 --ip 192.168.58.3 --volume multinode-052675-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
I0128 18:36:36.498023 121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Running}}
I0128 18:36:36.526995 121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
I0128 18:36:36.551820 121576 cli_runner.go:164] Run: docker exec multinode-052675-m02 stat /var/lib/dpkg/alternatives/iptables
I0128 18:36:36.602014 121576 oci.go:144] the created container "multinode-052675-m02" has a running status.
I0128 18:36:36.602046 121576 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa...
I0128 18:36:36.807554 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0128 18:36:36.807595 121576 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0128 18:36:36.879258 121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
I0128 18:36:36.914092 121576 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0128 18:36:36.914117 121576 kic_runner.go:114] Args: [docker exec --privileged multinode-052675-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
I0128 18:36:36.992079 121576 cli_runner.go:164] Run: docker container inspect multinode-052675-m02 --format={{.State.Status}}
I0128 18:36:37.019415 121576 machine.go:88] provisioning docker machine ...
I0128 18:36:37.019455 121576 ubuntu.go:169] provisioning hostname "multinode-052675-m02"
I0128 18:36:37.019541 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:37.044812 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:36:37.044976 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0128 18:36:37.044998 121576 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-052675-m02 && echo "multinode-052675-m02" | sudo tee /etc/hostname
I0128 18:36:37.186481 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-052675-m02
I0128 18:36:37.186556 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:37.211536 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:36:37.211680 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0128 18:36:37.211697 121576 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-052675-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-052675-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-052675-m02' | sudo tee -a /etc/hosts;
fi
fi
I0128 18:36:37.340333 121576 main.go:141] libmachine: SSH cmd err, output: <nil>:
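[editor's note: the hostname provisioning above runs shell over SSH against the forwarded port 32857 using the generated id_rsa. A compact sketch with golang.org/x/crypto/ssh; the key path and port are copied from this session and would differ elsewhere, and host-key checking is disabled only because the target is a throwaway local container.]

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumption: key path abbreviated relative to $HOME for the sketch.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-052675-m02/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32857", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(`sudo hostname multinode-052675-m02 && echo "multinode-052675-m02" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}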
I0128 18:36:37.340363 121576 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3259/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3259/.minikube}
I0128 18:36:37.340383 121576 ubuntu.go:177] setting up certificates
I0128 18:36:37.340394 121576 provision.go:83] configureAuth start
I0128 18:36:37.340511 121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m02
I0128 18:36:37.364511 121576 provision.go:138] copyHostCerts
I0128 18:36:37.364549 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
I0128 18:36:37.364576 121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem, removing ...
I0128 18:36:37.364581 121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem
I0128 18:36:37.364647 121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/ca.pem (1082 bytes)
I0128 18:36:37.364725 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
I0128 18:36:37.364741 121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem, removing ...
I0128 18:36:37.364744 121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem
I0128 18:36:37.364766 121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/cert.pem (1123 bytes)
I0128 18:36:37.364818 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
I0128 18:36:37.364832 121576 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem, removing ...
I0128 18:36:37.364839 121576 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem
I0128 18:36:37.364859 121576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3259/.minikube/key.pem (1679 bytes)
I0128 18:36:37.364916 121576 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem org=jenkins.multinode-052675-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-052675-m02]
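[editor's note: the provision step mints a server certificate whose SANs cover the node IP, localhost, and the hostnames in the san=[...] list above. A condensed crypto/x509 sketch of issuing such a cert; self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem as parent. The 26280h lifetime matches the CertExpiration field in the config dump.]

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-052675-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-052675-m02"},
	}
	// Self-signed for the sketch; minikube passes the CA cert and key as the
	// parent and signing key instead of tmpl/key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}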
I0128 18:36:37.465118 121576 provision.go:172] copyRemoteCerts
I0128 18:36:37.465178 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0128 18:36:37.465211 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:37.489451 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
I0128 18:36:37.579876 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0128 18:36:37.579955 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0128 18:36:37.598767 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem -> /etc/docker/server.pem
I0128 18:36:37.598832 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0128 18:36:37.618529 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0128 18:36:37.618593 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0128 18:36:37.637264 121576 provision.go:86] duration metric: configureAuth took 296.857007ms
I0128 18:36:37.637293 121576 ubuntu.go:193] setting minikube options for container-runtime
I0128 18:36:37.637456 121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:36:37.637499 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:37.660948 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:36:37.661131 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0128 18:36:37.661149 121576 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0128 18:36:37.796548 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0128 18:36:37.796570 121576 ubuntu.go:71] root file system type: overlay
I0128 18:36:37.796784 121576 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0128 18:36:37.796844 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:37.821221 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:36:37.821371 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0128 18:36:37.821432 121576 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]

Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.58.2"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0128 18:36:37.961020 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.58.2
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0128 18:36:37.961085 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:37.985366 121576 main.go:141] libmachine: Using SSH client type: native
I0128 18:36:37.985514 121576 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0128 18:36:37.985532 121576 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0128 18:36:38.645200 121576 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-01-19 17:34:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-28 18:36:37.955978629 +0000
@@ -1,30 +1,33 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+Environment=NO_PROXY=192.168.58.2
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +35,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
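[editor's note: the diff output above is a side effect of how the unit gets installed, as the SSH command shows: write docker.service.new, diff it against the live unit, and only on a difference move it into place, daemon-reload, and restart. The same write-compare-swap idiom in Go; a sketch of the pattern, not minikube's code.]

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged installs desired at path only when the content differs,
// returning whether a swap (and hence a service restart) is needed.
func updateIfChanged(path string, desired []byte) (changed bool, err error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return false, nil // nothing to do: no daemon-reload, no restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, desired, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}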
I0128 18:36:38.645287 121576 machine.go:91] provisioned docker machine in 1.62584735s
I0128 18:36:38.645308 121576 client.go:171] LocalClient.Create took 8.197304103s
I0128 18:36:38.645337 121576 start.go:167] duration metric: libmachine.API.Create for "multinode-052675" took 8.197368977s
I0128 18:36:38.645364 121576 start.go:300] post-start starting for "multinode-052675-m02" (driver="docker")
I0128 18:36:38.645385 121576 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0128 18:36:38.645468 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0128 18:36:38.645527 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:38.671290 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
I0128 18:36:38.768512 121576 ssh_runner.go:195] Run: cat /etc/os-release
I0128 18:36:38.771066 121576 command_runner.go:130] > NAME="Ubuntu"
I0128 18:36:38.771090 121576 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
I0128 18:36:38.771097 121576 command_runner.go:130] > ID=ubuntu
I0128 18:36:38.771103 121576 command_runner.go:130] > ID_LIKE=debian
I0128 18:36:38.771108 121576 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
I0128 18:36:38.771112 121576 command_runner.go:130] > VERSION_ID="20.04"
I0128 18:36:38.771118 121576 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
I0128 18:36:38.771125 121576 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
I0128 18:36:38.771130 121576 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I0128 18:36:38.771139 121576 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I0128 18:36:38.771146 121576 command_runner.go:130] > VERSION_CODENAME=focal
I0128 18:36:38.771153 121576 command_runner.go:130] > UBUNTU_CODENAME=focal
I0128 18:36:38.771248 121576 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0128 18:36:38.771274 121576 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0128 18:36:38.771287 121576 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0128 18:36:38.771297 121576 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0128 18:36:38.771310 121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/addons for local assets ...
I0128 18:36:38.771369 121576 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3259/.minikube/files for local assets ...
I0128 18:36:38.771449 121576 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> 103532.pem in /etc/ssl/certs
I0128 18:36:38.771464 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /etc/ssl/certs/103532.pem
I0128 18:36:38.771556 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0128 18:36:38.778377 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:36:38.798148 121576 start.go:303] post-start completed in 152.756842ms
I0128 18:36:38.798538 121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m02
I0128 18:36:38.822984 121576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/config.json ...
I0128 18:36:38.823284 121576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0128 18:36:38.823333 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:38.847642 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
I0128 18:36:38.936817 121576 command_runner.go:130] > 16%
I0128 18:36:38.937023 121576 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0128 18:36:38.940834 121576 command_runner.go:130] > 246G
I0128 18:36:38.940977 121576 start.go:128] duration metric: createHost completed in 8.49576855s
I0128 18:36:38.940997 121576 start.go:83] releasing machines lock for "multinode-052675-m02", held for 8.495882404s
I0128 18:36:38.941082 121576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-052675-m02
I0128 18:36:38.967252 121576 out.go:177] * Found network options:
I0128 18:36:38.969215 121576 out.go:177] - NO_PROXY=192.168.58.2
W0128 18:36:38.971065 121576 proxy.go:119] fail to check proxy env: Error ip not in block
W0128 18:36:38.971126 121576 proxy.go:119] fail to check proxy env: Error ip not in block
I0128 18:36:38.971206 121576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0128 18:36:38.971245 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:38.971281 121576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0128 18:36:38.971332 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675-m02
I0128 18:36:38.997315 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
I0128 18:36:38.999813 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675-m02/id_rsa Username:docker}
I0128 18:36:39.121016 121576 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0128 18:36:39.121066 121576 command_runner.go:130] > File: /etc/cni/net.d/200-loopback.conf
I0128 18:36:39.121078 121576 command_runner.go:130] > Size: 54 Blocks: 8 IO Block: 4096 regular file
I0128 18:36:39.121085 121576 command_runner.go:130] > Device: e3h/227d Inode: 568458 Links: 1
I0128 18:36:39.121095 121576 command_runner.go:130] > Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
I0128 18:36:39.121103 121576 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
I0128 18:36:39.121111 121576 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
I0128 18:36:39.121118 121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.814355792 +0000
I0128 18:36:39.121124 121576 command_runner.go:130] > Birth: -
I0128 18:36:39.121191 121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0128 18:36:39.141685 121576 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0128 18:36:39.141797 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0128 18:36:39.148794 121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0128 18:36:39.161466 121576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0128 18:36:39.177917 121576 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf,
I0128 18:36:39.177964 121576 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0128 18:36:39.177984 121576 start.go:483] detecting cgroup driver to use...
I0128 18:36:39.178016 121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
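[editor's note: detecting the "cgroupfs" driver comes down to inspecting the host's cgroup mount. One common probe is a statfs on /sys/fs/cgroup to distinguish a unified cgroup v2 hierarchy from v1; a sketch with golang.org/x/sys/unix, noting that minikube's detect package may differ in detail.]

//go:build linux

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("unified cgroup v2 hierarchy")
	} else {
		fmt.Println("cgroup v1: \"cgroupfs\" driver is the safe default")
	}
}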
I0128 18:36:39.178143 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:36:39.191245 121576 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0128 18:36:39.191280 121576 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0128 18:36:39.192045 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0128 18:36:39.200533 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0128 18:36:39.208865 121576 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0128 18:36:39.208931 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0128 18:36:39.216787 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:36:39.224383 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0128 18:36:39.232622 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0128 18:36:39.240862 121576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0128 18:36:39.248617 121576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0128 18:36:39.258134 121576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0128 18:36:39.264408 121576 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0128 18:36:39.264958 121576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0128 18:36:39.271530 121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:36:39.356085 121576 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0128 18:36:39.440673 121576 start.go:483] detecting cgroup driver to use...
I0128 18:36:39.440726 121576 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0128 18:36:39.440763 121576 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0128 18:36:39.451995 121576 command_runner.go:130] > # /lib/systemd/system/docker.service
I0128 18:36:39.452019 121576 command_runner.go:130] > [Unit]
I0128 18:36:39.452032 121576 command_runner.go:130] > Description=Docker Application Container Engine
I0128 18:36:39.452040 121576 command_runner.go:130] > Documentation=https://docs.docker.com
I0128 18:36:39.452047 121576 command_runner.go:130] > BindsTo=containerd.service
I0128 18:36:39.452056 121576 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
I0128 18:36:39.452063 121576 command_runner.go:130] > Wants=network-online.target
I0128 18:36:39.452068 121576 command_runner.go:130] > Requires=docker.socket
I0128 18:36:39.452072 121576 command_runner.go:130] > StartLimitBurst=3
I0128 18:36:39.452082 121576 command_runner.go:130] > StartLimitIntervalSec=60
I0128 18:36:39.452091 121576 command_runner.go:130] > [Service]
I0128 18:36:39.452101 121576 command_runner.go:130] > Type=notify
I0128 18:36:39.452110 121576 command_runner.go:130] > Restart=on-failure
I0128 18:36:39.452120 121576 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
I0128 18:36:39.452135 121576 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0128 18:36:39.452150 121576 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0128 18:36:39.452160 121576 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0128 18:36:39.452175 121576 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0128 18:36:39.452189 121576 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0128 18:36:39.452204 121576 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0128 18:36:39.452219 121576 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0128 18:36:39.452238 121576 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0128 18:36:39.452248 121576 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0128 18:36:39.452257 121576 command_runner.go:130] > ExecStart=
I0128 18:36:39.452283 121576 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0128 18:36:39.452295 121576 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0128 18:36:39.452305 121576 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0128 18:36:39.452319 121576 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0128 18:36:39.452327 121576 command_runner.go:130] > LimitNOFILE=infinity
I0128 18:36:39.452334 121576 command_runner.go:130] > LimitNPROC=infinity
I0128 18:36:39.452339 121576 command_runner.go:130] > LimitCORE=infinity
I0128 18:36:39.452351 121576 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0128 18:36:39.452363 121576 command_runner.go:130] > # Only systemd 226 and above support this version.
I0128 18:36:39.452373 121576 command_runner.go:130] > TasksMax=infinity
I0128 18:36:39.452384 121576 command_runner.go:130] > TimeoutStartSec=0
I0128 18:36:39.452397 121576 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0128 18:36:39.452406 121576 command_runner.go:130] > Delegate=yes
I0128 18:36:39.452420 121576 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0128 18:36:39.452432 121576 command_runner.go:130] > KillMode=process
I0128 18:36:39.452453 121576 command_runner.go:130] > [Install]
I0128 18:36:39.452460 121576 command_runner.go:130] > WantedBy=multi-user.target
I0128 18:36:39.452486 121576 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0128 18:36:39.452534 121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0128 18:36:39.461413 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0128 18:36:39.473228 121576 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0128 18:36:39.473259 121576 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0128 18:36:39.474224 121576 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0128 18:36:39.572515 121576 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0128 18:36:39.656210 121576 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0128 18:36:39.656252 121576 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
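The 144-byte daemon.json itself is never echoed in the log; all the surrounding lines establish is that it switches Docker to the cgroupfs cgroup driver. A hypothetical reconstruction in Go follows: the exec-opts key is the standard way to set the driver, but the exact payload here is an assumption.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical contents: the log only states that the file
	// configures the "cgroupfs" cgroup driver, not its exact payload.
	daemon := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(daemon, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
}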
I0128 18:36:39.679177 121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:36:39.758690 121576 ssh_runner.go:195] Run: sudo systemctl restart docker
I0128 18:36:39.968276 121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:36:40.053216 121576 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
I0128 18:36:40.053283 121576 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0128 18:36:40.127939 121576 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0128 18:36:40.200235 121576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0128 18:36:40.278497 121576 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0128 18:36:40.290297 121576 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0128 18:36:40.290366 121576 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0128 18:36:40.293667 121576 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0128 18:36:40.293695 121576 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0128 18:36:40.293722 121576 command_runner.go:130] > Device: ech/236d Inode: 206 Links: 1
I0128 18:36:40.293732 121576 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 999/ docker)
I0128 18:36:40.293743 121576 command_runner.go:130] > Access: 2023-01-28 18:36:40.284205966 +0000
I0128 18:36:40.293751 121576 command_runner.go:130] > Modify: 2023-01-28 18:36:40.284205966 +0000
I0128 18:36:40.293763 121576 command_runner.go:130] > Change: 2023-01-28 18:36:40.288206356 +0000
I0128 18:36:40.293772 121576 command_runner.go:130] > Birth: -
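The "Will wait 60s for socket path" step above is a bounded poll: stat the path until it exists and is a socket, then move on (here it succeeds immediately, as the stat output shows). A self-contained Go equivalent, using a local os.Stat instead of the log's stat-over-SSH:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the
// timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is ready")
}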
I0128 18:36:40.293793 121576 start.go:551] Will wait 60s for crictl version
I0128 18:36:40.293836 121576 ssh_runner.go:195] Run: which crictl
I0128 18:36:40.296677 121576 command_runner.go:130] > /usr/bin/crictl
I0128 18:36:40.296754 121576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0128 18:36:40.389499 121576 command_runner.go:130] > Version: 0.1.0
I0128 18:36:40.389524 121576 command_runner.go:130] > RuntimeName: docker
I0128 18:36:40.389532 121576 command_runner.go:130] > RuntimeVersion: 20.10.23
I0128 18:36:40.389561 121576 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0128 18:36:40.391399 121576 start.go:567] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.23
RuntimeApiVersion: v1alpha2
I0128 18:36:40.391465 121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:36:40.418729 121576 command_runner.go:130] > 20.10.23
I0128 18:36:40.420063 121576 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0128 18:36:40.446068 121576 command_runner.go:130] > 20.10.23
I0128 18:36:40.448985 121576 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
I0128 18:36:40.451061 121576 out.go:177] - env NO_PROXY=192.168.58.2
I0128 18:36:40.452613 121576 cli_runner.go:164] Run: docker network inspect multinode-052675 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0128 18:36:40.475321 121576 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0128 18:36:40.478656 121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 18:36:40.488113 121576 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675 for IP: 192.168.58.3
I0128 18:36:40.488148 121576 certs.go:186] acquiring lock for shared ca certs: {Name:mk283707adcbf18cf93dab5399aa9ec0bae25e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0128 18:36:40.488269 121576 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key
I0128 18:36:40.488305 121576 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key
I0128 18:36:40.488316 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0128 18:36:40.488329 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0128 18:36:40.488339 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0128 18:36:40.488349 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0128 18:36:40.488393 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem (1338 bytes)
W0128 18:36:40.488420 121576 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353_empty.pem, impossibly tiny 0 bytes
I0128 18:36:40.488429 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca-key.pem (1675 bytes)
I0128 18:36:40.488489 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/ca.pem (1082 bytes)
I0128 18:36:40.488515 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/cert.pem (1123 bytes)
I0128 18:36:40.488536 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/home/jenkins/minikube-integration/15565-3259/.minikube/certs/key.pem (1679 bytes)
I0128 18:36:40.488577 121576 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem (1708 bytes)
I0128 18:36:40.488613 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem -> /usr/share/ca-certificates/10353.pem
I0128 18:36:40.488626 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem -> /usr/share/ca-certificates/103532.pem
I0128 18:36:40.488638 121576 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0128 18:36:40.488957 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0128 18:36:40.507014 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0128 18:36:40.524478 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0128 18:36:40.543353 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0128 18:36:40.560429 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/certs/10353.pem --> /usr/share/ca-certificates/10353.pem (1338 bytes)
I0128 18:36:40.578015 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/files/etc/ssl/certs/103532.pem --> /usr/share/ca-certificates/103532.pem (1708 bytes)
I0128 18:36:40.595313 121576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0128 18:36:40.613204 121576 ssh_runner.go:195] Run: openssl version
I0128 18:36:40.618093 121576 command_runner.go:130] > OpenSSL 1.1.1f 31 Mar 2020
I0128 18:36:40.618185 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/103532.pem && ln -fs /usr/share/ca-certificates/103532.pem /etc/ssl/certs/103532.pem"
I0128 18:36:40.626593 121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/103532.pem
I0128 18:36:40.629593 121576 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
I0128 18:36:40.629631 121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:25 /usr/share/ca-certificates/103532.pem
I0128 18:36:40.629675 121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/103532.pem
I0128 18:36:40.634281 121576 command_runner.go:130] > 3ec20f2e
I0128 18:36:40.634474 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/103532.pem /etc/ssl/certs/3ec20f2e.0"
I0128 18:36:40.641620 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0128 18:36:40.648710 121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0128 18:36:40.651424 121576 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0128 18:36:40.651520 121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:22 /usr/share/ca-certificates/minikubeCA.pem
I0128 18:36:40.651563 121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0128 18:36:40.655814 121576 command_runner.go:130] > b5213941
I0128 18:36:40.655989 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0128 18:36:40.662836 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10353.pem && ln -fs /usr/share/ca-certificates/10353.pem /etc/ssl/certs/10353.pem"
I0128 18:36:40.669614 121576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10353.pem
I0128 18:36:40.672366 121576 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
I0128 18:36:40.672504 121576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:25 /usr/share/ca-certificates/10353.pem
I0128 18:36:40.672544 121576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10353.pem
I0128 18:36:40.676971 121576 command_runner.go:130] > 51391683
I0128 18:36:40.677135 121576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10353.pem /etc/ssl/certs/51391683.0"
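Each certificate above goes through the same three steps: copy it under /usr/share/ca-certificates, ask openssl for its subject-name hash, and symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL's hash-based directory lookup can find it (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). A Go sketch of the hash-and-link step, shelling out to the same openssl invocation the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes a CA certificate's subject-name hash with
// openssl and symlinks the cert as /etc/ssl/certs/<hash>.0.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}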
I0128 18:36:40.683930 121576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0128 18:36:40.748516 121576 command_runner.go:130] > cgroupfs
I0128 18:36:40.751678 121576 cni.go:84] Creating CNI manager for ""
I0128 18:36:40.751703 121576 cni.go:136] 2 nodes found, recommending kindnet
I0128 18:36:40.751716 121576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0128 18:36:40.751737 121576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-052675 NodeName:multinode-052675-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0128 18:36:40.751904 121576 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.3
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-052675-m02"
kubeletExtraArgs:
node-ip: 192.168.58.3
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0128 18:36:40.751981 121576 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-052675-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0128 18:36:40.752026 121576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0128 18:36:40.758803 121576 command_runner.go:130] > kubeadm
I0128 18:36:40.758835 121576 command_runner.go:130] > kubectl
I0128 18:36:40.758841 121576 command_runner.go:130] > kubelet
I0128 18:36:40.759320 121576 binaries.go:44] Found k8s binaries, skipping transfer
I0128 18:36:40.759384 121576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0128 18:36:40.766330 121576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0128 18:36:40.779694 121576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0128 18:36:40.793186 121576 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0128 18:36:40.796094 121576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0128 18:36:40.806174 121576 host.go:66] Checking if "multinode-052675" exists ...
I0128 18:36:40.806443 121576 config.go:180] Loaded profile config "multinode-052675": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0128 18:36:40.806396 121576 start.go:299] JoinCluster: &{Name:multinode-052675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-052675 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0128 18:36:40.806509 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0128 18:36:40.806559 121576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-052675
I0128 18:36:40.830799 121576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3259/.minikube/machines/multinode-052675/id_rsa Username:docker}
I0128 18:36:40.976483 121576 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ylk1e7.vhmzv49ssdy5cgya --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc
I0128 18:36:40.976545 121576 start.go:320] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0128 18:36:40.976581 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ylk1e7.vhmzv49ssdy5cgya --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m02"
I0128 18:36:41.015053 121576 command_runner.go:130] > [preflight] Running pre-flight checks
I0128 18:36:41.041547 121576 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
I0128 18:36:41.041577 121576 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
I0128 18:36:41.041584 121576 command_runner.go:130] > OS: Linux
I0128 18:36:41.041592 121576 command_runner.go:130] > CGROUPS_CPU: enabled
I0128 18:36:41.041600 121576 command_runner.go:130] > CGROUPS_CPUACCT: enabled
I0128 18:36:41.041607 121576 command_runner.go:130] > CGROUPS_CPUSET: enabled
I0128 18:36:41.041614 121576 command_runner.go:130] > CGROUPS_DEVICES: enabled
I0128 18:36:41.041622 121576 command_runner.go:130] > CGROUPS_FREEZER: enabled
I0128 18:36:41.041631 121576 command_runner.go:130] > CGROUPS_MEMORY: enabled
I0128 18:36:41.041646 121576 command_runner.go:130] > CGROUPS_PIDS: enabled
I0128 18:36:41.041657 121576 command_runner.go:130] > CGROUPS_HUGETLB: enabled
I0128 18:36:41.041667 121576 command_runner.go:130] > CGROUPS_BLKIO: enabled
I0128 18:36:41.124938 121576 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0128 18:36:41.124971 121576 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0128 18:36:41.152033 121576 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0128 18:36:41.152062 121576 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0128 18:36:41.152069 121576 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0128 18:36:41.233531 121576 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0128 18:36:42.752176 121576 command_runner.go:130] > This node has joined the cluster:
I0128 18:36:42.752204 121576 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0128 18:36:42.752210 121576 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0128 18:36:42.752217 121576 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0128 18:36:42.754759 121576 command_runner.go:130] ! W0128 18:36:41.014600 1345 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0128 18:36:42.754794 121576 command_runner.go:130] ! [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
I0128 18:36:42.754804 121576 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0128 18:36:42.754821 121576 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ylk1e7.vhmzv49ssdy5cgya --discovery-token-ca-cert-hash sha256:d4bcfb24622aca498a3a9023e04529ba11f586e6bd009c868882b449f978b0bc --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-052675-m02": (1.778225763s)
I0128 18:36:42.754836 121576 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0128 18:36:42.842093 121576 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
I0128 18:36:42.918534 121576 start.go:301] JoinCluster complete in 2.112133258s
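The join above is a two-step dance: kubeadm token create --print-join-command --ttl=0 runs on the control plane and prints a complete join command with a fresh token, which is then replayed on the worker with extra flags (preflight errors ignored, an explicit CRI socket, and a node name). A local-exec Go sketch of the same flow; running the commands directly instead of through minikube's SSH runner is the simplification here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: mint the join command (run on the control-plane node).
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.Fields(strings.TrimSpace(string(out)))

	// Step 2: replay it on the worker with the same extra flags the
	// log shows.
	args := append(join[1:], // drop the leading "kubeadm"
		"--ignore-preflight-errors=all",
		"--cri-socket", "/var/run/cri-dockerd.sock",
		"--node-name=multinode-052675-m02")
	b, err := exec.Command("kubeadm", args...).CombinedOutput()
	fmt.Print(string(b))
	if err != nil {
		panic(err)
	}
}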
I0128 18:36:42.918559 121576 cni.go:84] Creating CNI manager for ""
I0128 18:36:42.918564 121576 cni.go:136] 2 nodes found, recommending kindnet
I0128 18:36:42.918600 121576 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0128 18:36:42.921867 121576 command_runner.go:130] > File: /opt/cni/bin/portmap
I0128 18:36:42.921894 121576 command_runner.go:130] > Size: 2828728 Blocks: 5528 IO Block: 4096 regular file
I0128 18:36:42.921910 121576 command_runner.go:130] > Device: 34h/52d Inode: 566552 Links: 1
I0128 18:36:42.921920 121576 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0128 18:36:42.921929 121576 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
I0128 18:36:42.921939 121576 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
I0128 18:36:42.921946 121576 command_runner.go:130] > Change: 2023-01-28 18:22:00.070283151 +0000
I0128 18:36:42.921952 121576 command_runner.go:130] > Birth: -
I0128 18:36:42.921998 121576 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0128 18:36:42.922009 121576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0128 18:36:42.935745 121576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0128 18:36:43.088293 121576 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0128 18:36:43.091858 121576 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0128 18:36:43.094168 121576 command_runner.go:130] > serviceaccount/kindnet unchanged
I0128 18:36:43.106856 121576 command_runner.go:130] > daemonset.apps/kindnet configured
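With the worker joined, the kindnet CNI manifest is reapplied through the cluster's pinned kubectl; the "unchanged"/"configured" lines confirm the apply is idempotent as nodes join. A Go sketch of the equivalent invocation, reusing the binary, kubeconfig, and manifest paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Apply the CNI manifest with the version-pinned kubectl, exactly
	// as the ssh_runner step above does on the node.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.26.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		os.Exit(1)
	}
}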
I0128 18:36:43.110918 121576 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3259/kubeconfig
I0128 18:36:43.111135 121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 18:36:43.111398 121576 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0128 18:36:43.111408 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.111416 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.111422 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.113568 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:43.113592 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.113601 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.113609 121576 round_trippers.go:580] Audit-Id: 0d4f7448-78d2-45da-baa4-0f8b4b1bc78d
I0128 18:36:43.113617 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.113625 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.113634 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.113650 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.113660 121576 round_trippers.go:580] Content-Length: 291
I0128 18:36:43.113702 121576 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"fbc2f69e-4ede-442d-b610-9d362fe4c9ff","resourceVersion":"428","creationTimestamp":"2023-01-28T18:36:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0128 18:36:43.113815 121576 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-052675" context rescaled to 1 replicas
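The coredns rescale goes through the deployment's scale subresource (the GET .../deployments/coredns/scale above), not a full deployment update, so only the replica count is touched. A client-go sketch of the same read-modify-write, using the kubeconfig path from the log; error handling is minimal for brevity:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/15565-3259/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Read the current Scale object, set the desired replica count,
	// and write it back through the same subresource.
	ctx := context.Background()
	deps := cs.AppsV1().Deployments("kube-system")
	scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}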
I0128 18:36:43.113846 121576 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0128 18:36:43.117234 121576 out.go:177] * Verifying Kubernetes components...
I0128 18:36:43.119329 121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0128 18:36:43.129543 121576 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3259/kubeconfig
I0128 18:36:43.129791 121576 kapi.go:59] client config for multinode-052675: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/profiles/multinode-052675/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3259/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x18895c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0128 18:36:43.130045 121576 node_ready.go:35] waiting up to 6m0s for node "multinode-052675-m02" to be "Ready" ...
I0128 18:36:43.130114 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
I0128 18:36:43.130124 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.130131 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.130138 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.132268 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:43.132297 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.132305 121576 round_trippers.go:580] Audit-Id: 46e35ead-b028-4a54-9a6a-c2cc83b6a177
I0128 18:36:43.132314 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.132324 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.132333 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.132346 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.132355 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.132513 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
I0128 18:36:43.132879 121576 node_ready.go:49] node "multinode-052675-m02" has status "Ready":"True"
I0128 18:36:43.132909 121576 node_ready.go:38] duration metric: took 2.838167ms waiting for node "multinode-052675-m02" to be "Ready" ...
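The readiness wait above is a plain poll of the node object: fetch it and look for the Ready condition with status "True", satisfied here on the first try (2.8ms). A client-go sketch of the same check with the log's 6-minute budget; the 3-second poll interval is an assumption:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(cs, "multinode-052675-m02"); ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node")
	os.Exit(1)
}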
I0128 18:36:43.132923 121576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0128 18:36:43.133010 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0128 18:36:43.133021 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.133031 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.133043 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.136130 121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0128 18:36:43.136160 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.136175 121576 round_trippers.go:580] Audit-Id: 6cb55554-134f-4dfc-87d5-a387dab56006
I0128 18:36:43.136183 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.136195 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.136208 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.136219 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.136234 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.136716 121576 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"472"},"items":[{"metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65332 chars]
I0128 18:36:43.138683 121576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.138742 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-c28p8
I0128 18:36:43.138750 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.138757 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.138771 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.140963 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:43.140986 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.140994 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.141004 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.141012 121576 round_trippers.go:580] Audit-Id: 4ba6aa82-e083-416c-bac1-7840ffd40ca0
I0128 18:36:43.141024 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.141036 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.141047 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.141146 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-c28p8","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"d87aee89-96d2-4627-a7ec-00a4d69653aa","resourceVersion":"424","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"9361ba59-8371-40a9-b9b9-8727e0039b00","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9361ba59-8371-40a9-b9b9-8727e0039b00\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5942 chars]
I0128 18:36:43.141763 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:43.141781 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.141792 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.141802 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.143630 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:43.143651 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.143657 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.143663 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.143668 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.143678 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.143683 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.143691 121576 round_trippers.go:580] Audit-Id: 73011a9e-1d73-42f2-9dbd-154177538634
I0128 18:36:43.143784 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
I0128 18:36:43.144065 121576 pod_ready.go:92] pod "coredns-787d4945fb-c28p8" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:43.144078 121576 pod_ready.go:81] duration metric: took 5.377157ms waiting for pod "coredns-787d4945fb-c28p8" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.144087 121576 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.144128 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-052675
I0128 18:36:43.144135 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.144142 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.144149 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.145814 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:43.145835 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.145846 121576 round_trippers.go:580] Audit-Id: ec1f57af-0d79-4d62-b208-bcbd3e3e4819
I0128 18:36:43.145853 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.145861 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.145867 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.145879 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.145891 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.145992 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-052675","namespace":"kube-system","uid":"cf8dcb5a-42b0-44a1-aa07-56a3a6c1ff1d","resourceVersion":"261","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.mirror":"11ebc72e731e7d22158ad52d97ae7480","kubernetes.io/config.seen":"2023-01-28T18:36:05.844239404Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
I0128 18:36:43.146372 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:43.146385 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.146392 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.146399 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.147987 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:43.148008 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.148017 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.148023 121576 round_trippers.go:580] Audit-Id: 9739b17f-cb68-4aed-931c-be147d044104
I0128 18:36:43.148031 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.148044 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.148059 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.148068 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.148175 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
I0128 18:36:43.148474 121576 pod_ready.go:92] pod "etcd-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:43.148488 121576 pod_ready.go:81] duration metric: took 4.39583ms waiting for pod "etcd-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.148501 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.148537 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-052675
I0128 18:36:43.148544 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.148551 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.148557 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.150306 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:43.150340 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.150352 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.150362 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.150371 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.150382 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.150390 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.150398 121576 round_trippers.go:580] Audit-Id: 924049d3-2248-4162-9eb1-bd8752c395b4
I0128 18:36:43.150521 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-052675","namespace":"kube-system","uid":"c9b8edb5-77fc-4191-b470-8a73c76a3a73","resourceVersion":"291","creationTimestamp":"2023-01-28T18:36:05Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.mirror":"67b267479ac4834e2613b5155d6d00dd","kubernetes.io/config.seen":"2023-01-28T18:35:55.862480624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
I0128 18:36:43.150919 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:43.150932 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.150938 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.150945 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.152524 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:43.152547 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.152556 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.152565 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.152580 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.152588 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.152597 121576 round_trippers.go:580] Audit-Id: 411ade94-1797-4a2f-bc5e-2b870c32eb22
I0128 18:36:43.152606 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.152684 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
I0128 18:36:43.152978 121576 pod_ready.go:92] pod "kube-apiserver-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:43.152996 121576 pod_ready.go:81] duration metric: took 4.490065ms waiting for pod "kube-apiserver-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.153005 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.153046 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-052675
I0128 18:36:43.153053 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.153059 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.153065 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.154708 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:43.154724 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.154731 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.154738 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.154746 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.154763 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.154772 121576 round_trippers.go:580] Audit-Id: c63dabd3-7958-4cdc-b20a-aad5bdf90d09
I0128 18:36:43.154780 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.154905 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-052675","namespace":"kube-system","uid":"6dd849f3-f4b3-4704-a3c5-671cb6a2350c","resourceVersion":"276","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.mirror":"df8dfac1e7b7f039ea2eca812f9510dc","kubernetes.io/config.seen":"2023-01-28T18:36:05.844267614Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
I0128 18:36:43.155325 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:43.155337 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.155344 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.155351 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.156837 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:43.156851 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.156858 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.156863 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.156868 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.156873 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.156878 121576 round_trippers.go:580] Audit-Id: adb7b46b-4b8d-4355-8f33-79e60a6e24cb
I0128 18:36:43.156883 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.156946 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
I0128 18:36:43.157201 121576 pod_ready.go:92] pod "kube-controller-manager-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:43.157211 121576 pod_ready.go:81] duration metric: took 4.198488ms waiting for pod "kube-controller-manager-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.157218 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8btnm" in "kube-system" namespace to be "Ready" ...
I0128 18:36:43.330638 121576 request.go:622] Waited for 173.322814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
I0128 18:36:43.330687 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
I0128 18:36:43.330691 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.330698 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.330705 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.332919 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:43.332944 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.332954 121576 round_trippers.go:580] Audit-Id: a1683f59-f481-4bfb-8c5b-3116e080cf41
I0128 18:36:43.332962 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.332969 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.332978 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.332986 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.332995 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.333113 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"458","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
I0128 18:36:43.530918 121576 request.go:622] Waited for 197.36611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
I0128 18:36:43.530965 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
I0128 18:36:43.530972 121576 round_trippers.go:469] Request Headers:
I0128 18:36:43.530980 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:43.530986 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:43.533260 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:43.533324 121576 round_trippers.go:577] Response Headers:
I0128 18:36:43.533340 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:43.533348 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:43.533355 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:43.533364 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:43 GMT
I0128 18:36:43.533370 121576 round_trippers.go:580] Audit-Id: bf890605-ab19-4fb0-a42c-34089281c630
I0128 18:36:43.533378 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:43.533463 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
I0128 18:36:44.035366 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
I0128 18:36:44.035392 121576 round_trippers.go:469] Request Headers:
I0128 18:36:44.035404 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:44.035414 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:44.037983 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:44.038016 121576 round_trippers.go:577] Response Headers:
I0128 18:36:44.038031 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:44.038040 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:44.038047 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:44.038056 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:44.038070 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:44 GMT
I0128 18:36:44.038080 121576 round_trippers.go:580] Audit-Id: eec19d04-62c0-4f95-912d-69519fc965be
I0128 18:36:44.038248 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"475","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
I0128 18:36:44.038868 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
I0128 18:36:44.038889 121576 round_trippers.go:469] Request Headers:
I0128 18:36:44.038902 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:44.038912 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:44.041472 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:44.041498 121576 round_trippers.go:577] Response Headers:
I0128 18:36:44.041507 121576 round_trippers.go:580] Audit-Id: 9df2c3c2-81a4-40f4-8a43-2208d0bc4cf1
I0128 18:36:44.041515 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:44.041523 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:44.041531 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:44.041546 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:44.041558 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:44 GMT
I0128 18:36:44.041675 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
I0128 18:36:44.535255 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
I0128 18:36:44.535278 121576 round_trippers.go:469] Request Headers:
I0128 18:36:44.535290 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:44.535299 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:44.539063 121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0128 18:36:44.539095 121576 round_trippers.go:577] Response Headers:
I0128 18:36:44.539107 121576 round_trippers.go:580] Audit-Id: 63d8c775-c92f-493b-80d6-f6f63dc44ad8
I0128 18:36:44.539117 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:44.539127 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:44.539137 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:44.539146 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:44.539157 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:44 GMT
I0128 18:36:44.539292 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"475","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
I0128 18:36:44.539876 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
I0128 18:36:44.539894 121576 round_trippers.go:469] Request Headers:
I0128 18:36:44.539904 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:44.539917 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:44.541755 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:44.541776 121576 round_trippers.go:577] Response Headers:
I0128 18:36:44.541786 121576 round_trippers.go:580] Audit-Id: a17e8604-aac9-4536-9b63-e678db583453
I0128 18:36:44.541792 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:44.541797 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:44.541802 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:44.541809 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:44.541826 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:44 GMT
I0128 18:36:44.541913 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
I0128 18:36:45.034523 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8btnm
I0128 18:36:45.034542 121576 round_trippers.go:469] Request Headers:
I0128 18:36:45.034550 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:45.034556 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:45.036646 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:45.036668 121576 round_trippers.go:577] Response Headers:
I0128 18:36:45.036679 121576 round_trippers.go:580] Audit-Id: 0144aace-231f-4c28-bc5d-15e161d7ea9c
I0128 18:36:45.036686 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:45.036692 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:45.036697 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:45.036707 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:45.036712 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:45 GMT
I0128 18:36:45.036829 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8btnm","generateName":"kube-proxy-","namespace":"kube-system","uid":"dd10af1e-3564-461b-984e-a87970be2539","resourceVersion":"483","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0128 18:36:45.037323 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675-m02
I0128 18:36:45.037335 121576 round_trippers.go:469] Request Headers:
I0128 18:36:45.037342 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:45.037348 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:45.038929 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:45.038948 121576 round_trippers.go:577] Response Headers:
I0128 18:36:45.038957 121576 round_trippers.go:580] Audit-Id: 58d8d617-5cad-43cf-a5f2-1b63ff74a55c
I0128 18:36:45.038964 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:45.038972 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:45.038981 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:45.038992 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:45.039008 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:45 GMT
I0128 18:36:45.039097 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675-m02","uid":"035144f1-2a0b-4b51-ba60-ad9469ce9b49","resourceVersion":"472","creationTimestamp":"2023-01-28T18:36:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:41Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4061 chars]
I0128 18:36:45.039377 121576 pod_ready.go:92] pod "kube-proxy-8btnm" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:45.039397 121576 pod_ready.go:81] duration metric: took 1.882175089s waiting for pod "kube-proxy-8btnm" in "kube-system" namespace to be "Ready" ...
I0128 18:36:45.039407 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
I0128 18:36:45.039449 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hz5nz
I0128 18:36:45.039456 121576 round_trippers.go:469] Request Headers:
I0128 18:36:45.039463 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:45.039469 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:45.041062 121576 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0128 18:36:45.041082 121576 round_trippers.go:577] Response Headers:
I0128 18:36:45.041089 121576 round_trippers.go:580] Audit-Id: d2e1a54f-3c5c-40fb-99aa-3bbce32a3ef7
I0128 18:36:45.041097 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:45.041104 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:45.041112 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:45.041130 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:45.041138 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:45 GMT
I0128 18:36:45.041258 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hz5nz","generateName":"kube-proxy-","namespace":"kube-system","uid":"85457440-94b9-4686-be3e-dc5b5cbc0fbb","resourceVersion":"390","creationTimestamp":"2023-01-28T18:36:18Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffe5a4a7-0557-4d5b-a9ec-dcd83463ca8e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
I0128 18:36:45.130879 121576 request.go:622] Waited for 89.230928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:45.130930 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:45.130935 121576 round_trippers.go:469] Request Headers:
I0128 18:36:45.130943 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:45.130949 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:45.133379 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:45.133407 121576 round_trippers.go:577] Response Headers:
I0128 18:36:45.133418 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:45 GMT
I0128 18:36:45.133425 121576 round_trippers.go:580] Audit-Id: b2c70ba2-dbbd-43d5-b0ea-915e4e5ca6e2
I0128 18:36:45.133431 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:45.133436 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:45.133442 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:45.133451 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:45.133544 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
I0128 18:36:45.133878 121576 pod_ready.go:92] pod "kube-proxy-hz5nz" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:45.133891 121576 pod_ready.go:81] duration metric: took 94.475521ms waiting for pod "kube-proxy-hz5nz" in "kube-system" namespace to be "Ready" ...
I0128 18:36:45.133901 121576 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:45.330258 121576 request.go:622] Waited for 196.285598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
I0128 18:36:45.330337 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-052675
I0128 18:36:45.330347 121576 round_trippers.go:469] Request Headers:
I0128 18:36:45.330360 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:45.330373 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:45.333422 121576 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0128 18:36:45.333452 121576 round_trippers.go:577] Response Headers:
I0128 18:36:45.333464 121576 round_trippers.go:580] Audit-Id: 9e88eb74-afdf-4872-98fd-db150d835c02
I0128 18:36:45.333473 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:45.333482 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:45.333491 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:45.333503 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:45.333510 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:45 GMT
I0128 18:36:45.333659 121576 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-052675","namespace":"kube-system","uid":"b93c851a-ef3e-45a2-88b6-08bf615609f3","resourceVersion":"263","creationTimestamp":"2023-01-28T18:36:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.mirror":"d47615414c8bc24a9efcf31abc68d62c","kubernetes.io/config.seen":"2023-01-28T18:36:05.844268554Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-28T18:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
I0128 18:36:45.530517 121576 request.go:622] Waited for 196.35734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:45.530578 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-052675
I0128 18:36:45.530589 121576 round_trippers.go:469] Request Headers:
I0128 18:36:45.530602 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:45.530616 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:45.533138 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:45.533159 121576 round_trippers.go:577] Response Headers:
I0128 18:36:45.533169 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:45.533178 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:45.533187 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:45 GMT
I0128 18:36:45.533201 121576 round_trippers.go:580] Audit-Id: 17906d91-9566-4c00-bcdc-7baa438bfb0a
I0128 18:36:45.533214 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:45.533227 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:45.533346 121576 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-28T18:36:02Z","fieldsType":"FieldsV1","fi [truncated 5163 chars]
I0128 18:36:45.533760 121576 pod_ready.go:92] pod "kube-scheduler-multinode-052675" in "kube-system" namespace has status "Ready":"True"
I0128 18:36:45.533779 121576 pod_ready.go:81] duration metric: took 399.869077ms waiting for pod "kube-scheduler-multinode-052675" in "kube-system" namespace to be "Ready" ...
I0128 18:36:45.533794 121576 pod_ready.go:38] duration metric: took 2.40085647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0128 18:36:45.533817 121576 system_svc.go:44] waiting for kubelet service to be running ....
I0128 18:36:45.533866 121576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0128 18:36:45.574170 121576 system_svc.go:56] duration metric: took 40.343931ms WaitForService to wait for kubelet.
I0128 18:36:45.574201 121576 kubeadm.go:578] duration metric: took 2.460321991s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0128 18:36:45.574226 121576 node_conditions.go:102] verifying NodePressure condition ...
I0128 18:36:45.730698 121576 request.go:622] Waited for 156.388209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
I0128 18:36:45.730772 121576 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
I0128 18:36:45.730781 121576 round_trippers.go:469] Request Headers:
I0128 18:36:45.730791 121576 round_trippers.go:473] Accept: application/json, */*
I0128 18:36:45.730801 121576 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0128 18:36:45.733356 121576 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0128 18:36:45.733379 121576 round_trippers.go:577] Response Headers:
I0128 18:36:45.733390 121576 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: f52fbe36-fb42-402a-a09a-47351fccf146
I0128 18:36:45.733398 121576 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: d17b216e-00fa-413c-ae40-c9096317aec9
I0128 18:36:45.733405 121576 round_trippers.go:580] Date: Sat, 28 Jan 2023 18:36:45 GMT
I0128 18:36:45.733414 121576 round_trippers.go:580] Audit-Id: 698ec73f-1071-439b-becc-e2d689f805e7
I0128 18:36:45.733421 121576 round_trippers.go:580] Cache-Control: no-cache, private
I0128 18:36:45.733430 121576 round_trippers.go:580] Content-Type: application/json
I0128 18:36:45.733594 121576 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-052675","uid":"268effb4-8027-4e54-b039-a0904cd4acde","resourceVersion":"436","creationTimestamp":"2023-01-28T18:36:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-052675","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0b7a59349a2d83a39298292bdec73f3c39ac1090","minikube.k8s.io/name":"multinode-052675","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_28T18_36_06_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10269 chars]
I0128 18:36:45.734082 121576 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0128 18:36:45.734100 121576 node_conditions.go:123] node cpu capacity is 8
I0128 18:36:45.734113 121576 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0128 18:36:45.734119 121576 node_conditions.go:123] node cpu capacity is 8
I0128 18:36:45.734124 121576 node_conditions.go:105] duration metric: took 159.892379ms to run NodePressure ...
I0128 18:36:45.734138 121576 start.go:228] waiting for startup goroutines ...
I0128 18:36:45.734148 121576 start.go:240] writing updated cluster config ...
I0128 18:36:45.759094 121576 ssh_runner.go:195] Run: rm -f paused
I0128 18:36:45.816971 121576 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
I0128 18:36:45.821156 121576 out.go:177] * Done! kubectl is now configured to use "multinode-052675" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Logs begin at Sat 2023-01-28 18:35:45 UTC, end at Sat 2023-01-28 18:39:56 UTC. --
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.465696045Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467831566Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467857592Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467875883Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.467884163Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478296235Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478321919Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478327287Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.478476443Z" level=info msg="Loading containers: start."
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.557652750Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.599021222Z" level=info msg="Loading containers: done."
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.609500824Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.609559862Z" level=info msg="Daemon has completed initialization"
Jan 28 18:35:51 multinode-052675 systemd[1]: Started Docker Application Container Engine.
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.628354782Z" level=info msg="API listen on [::]:2376"
Jan 28 18:35:51 multinode-052675 dockerd[928]: time="2023-01-28T18:35:51.632111306Z" level=info msg="API listen on /var/run/docker.sock"
Jan 28 18:36:20 multinode-052675 dockerd[928]: time="2023-01-28T18:36:20.204103943Z" level=info msg="ignoring event" container=8a1ae5e27e92612a02b6a8fc51ad3571fa87d2715702914217ad377e0b906466 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:20 multinode-052675 dockerd[928]: time="2023-01-28T18:36:20.592188993Z" level=info msg="ignoring event" container=88ecb2f902999c079288b99bd89bbfab63c88f490278f02ec03640fbb04e976c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:21 multinode-052675 dockerd[928]: time="2023-01-28T18:36:21.105618127Z" level=info msg="ignoring event" container=b89a334adecaefc426798a148bababa79049022aa49faa427d14cf48bc59860e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:22 multinode-052675 dockerd[928]: time="2023-01-28T18:36:22.113395644Z" level=info msg="ignoring event" container=ec7a6aa2ca969bb401287aed4fd63503e2c68a2b830d6f2e4c8b01fd99cc775c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:22 multinode-052675 dockerd[928]: time="2023-01-28T18:36:22.793066702Z" level=info msg="ignoring event" container=d9b7f41b12e705e71977b3f62ede990c0c3fe51cc614aa09dcc559f8918198eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:23 multinode-052675 dockerd[928]: time="2023-01-28T18:36:23.790651650Z" level=info msg="ignoring event" container=5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:24 multinode-052675 dockerd[928]: time="2023-01-28T18:36:24.826643855Z" level=info msg="ignoring event" container=676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:25 multinode-052675 dockerd[928]: time="2023-01-28T18:36:25.833532899Z" level=info msg="ignoring event" container=f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 28 18:36:26 multinode-052675 dockerd[928]: time="2023-01-28T18:36:26.858053500Z" level=info msg="ignoring event" container=5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
7f0e24e944cec gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 3 minutes ago Running busybox 0 cce38480cd66f
93745ccc8cb55 5185b96f0becf 3 minutes ago Running coredns 0 45f0655a1ddb9
46773a35b11bd kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe 3 minutes ago Running kindnet-cni 0 8fc19cc66c4f3
aeb357b4e2094 6e38f40d628db 3 minutes ago Running storage-provisioner 0 f88feb83a6497
acc3adc5776a5 46a6bb3c77ce0 3 minutes ago Running kube-proxy 0 54b173c8cf0ca
a377326949167 deb04688c4a35 3 minutes ago Running kube-apiserver 0 65692890c63e7
2e6c4095a9938 655493523f607 3 minutes ago Running kube-scheduler 0 9cef735af13e0
90ac627c99fcf e9c08e11b07f6 3 minutes ago Running kube-controller-manager 0 de160ca186d78
c4215b5f1c76b fce326961ae2d 3 minutes ago Running etcd 0 52885346a4282
*
* ==> coredns [93745ccc8cb5] <==
* [INFO] 10.244.1.2:35138 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130331s
[INFO] 10.244.0.3:58973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139836s
[INFO] 10.244.0.3:55407 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001990074s
[INFO] 10.244.0.3:59073 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118522s
[INFO] 10.244.0.3:52141 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088195s
[INFO] 10.244.0.3:35586 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001580209s
[INFO] 10.244.0.3:43340 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083768s
[INFO] 10.244.0.3:39293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071122s
[INFO] 10.244.0.3:59044 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065519s
[INFO] 10.244.1.2:42075 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180039s
[INFO] 10.244.1.2:56436 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108243s
[INFO] 10.244.1.2:46724 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100908s
[INFO] 10.244.1.2:40322 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107786s
[INFO] 10.244.0.3:48174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163856s
[INFO] 10.244.0.3:38165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096818s
[INFO] 10.244.0.3:39710 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066036s
[INFO] 10.244.0.3:57439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097139s
[INFO] 10.244.1.2:43000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164401s
[INFO] 10.244.1.2:34418 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156548s
[INFO] 10.244.1.2:52316 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147518s
[INFO] 10.244.1.2:59610 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100187s
[INFO] 10.244.0.3:34048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013944s
[INFO] 10.244.0.3:43257 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112776s
[INFO] 10.244.0.3:41888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009795s
[INFO] 10.244.0.3:52087 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008824s
*
* ==> describe nodes <==
* Name: multinode-052675
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-052675
kubernetes.io/os=linux
minikube.k8s.io/commit=0b7a59349a2d83a39298292bdec73f3c39ac1090
minikube.k8s.io/name=multinode-052675
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_28T18_36_06_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 28 Jan 2023 18:36:02 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-052675
AcquireTime: <unset>
RenewTime: Sat, 28 Jan 2023 18:39:50 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 28 Jan 2023 18:37:07 +0000 Sat, 28 Jan 2023 18:36:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 28 Jan 2023 18:37:07 +0000 Sat, 28 Jan 2023 18:36:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 28 Jan 2023 18:37:07 +0000 Sat, 28 Jan 2023 18:36:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 28 Jan 2023 18:37:07 +0000 Sat, 28 Jan 2023 18:36:16 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.58.2
Hostname: multinode-052675
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871752Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871752Ki
pods: 110
System Info:
Machine ID: f1a46cb41c9d45969ef9bdf4a48d9b28
System UUID: 59b520aa-117e-4374-90f6-231e5d061c51
Boot ID: c2f3d462-b386-480a-bd1b-c0d90433fb30
Kernel Version: 5.15.0-1027-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-g84sq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m10s
kube-system coredns-787d4945fb-c28p8 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 3m38s
kube-system etcd-multinode-052675 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 3m50s
kube-system kindnet-8pkk5 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 3m38s
kube-system kube-apiserver-multinode-052675 250m (3%) 0 (0%) 0 (0%) 0 (0%) 3m51s
kube-system kube-controller-manager-multinode-052675 200m (2%) 0 (0%) 0 (0%) 0 (0%) 3m50s
kube-system kube-proxy-hz5nz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m38s
kube-system kube-scheduler-multinode-052675 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3m50s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m36s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 100m (1%)
memory 220Mi (0%) 220Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m36s kube-proxy
Normal Starting 3m51s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m51s kubelet Node multinode-052675 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m51s kubelet Node multinode-052675 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m51s kubelet Node multinode-052675 status is now: NodeHasSufficientPID
Normal NodeNotReady 3m50s kubelet Node multinode-052675 status is now: NodeNotReady
Normal NodeAllocatableEnforced 3m50s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m40s kubelet Node multinode-052675 status is now: NodeReady
Normal RegisteredNode 3m39s node-controller Node multinode-052675 event: Registered Node multinode-052675 in Controller
Name: multinode-052675-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-052675-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 28 Jan 2023 18:36:41 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-052675-m02
AcquireTime: <unset>
RenewTime: Sat, 28 Jan 2023 18:39:54 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 28 Jan 2023 18:37:12 +0000 Sat, 28 Jan 2023 18:36:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 28 Jan 2023 18:37:12 +0000 Sat, 28 Jan 2023 18:36:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 28 Jan 2023 18:37:12 +0000 Sat, 28 Jan 2023 18:36:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 28 Jan 2023 18:37:12 +0000 Sat, 28 Jan 2023 18:36:42 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.58.3
Hostname: multinode-052675-m02
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871752Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871752Ki
pods: 110
System Info:
Machine ID: f1a46cb41c9d45969ef9bdf4a48d9b28
System UUID: 31460efa-712b-41df-976f-e2d9604391d1
Boot ID: c2f3d462-b386-480a-bd1b-c0d90433fb30
Kernel Version: 5.15.0-1027-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-g4wvp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m10s
kube-system kindnet-x4b6m 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 3m15s
kube-system kube-proxy-8btnm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m15s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (1%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m12s kube-proxy
Normal Starting 3m15s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m15s (x2 over 3m15s) kubelet Node multinode-052675-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m15s (x2 over 3m15s) kubelet Node multinode-052675-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m15s (x2 over 3m15s) kubelet Node multinode-052675-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m15s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 3m14s node-controller Node multinode-052675-m02 event: Registered Node multinode-052675-m02 in Controller
Normal NodeReady 3m14s kubelet Node multinode-052675-m02 status is now: NodeReady
Name: multinode-052675-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-052675-m03
kubernetes.io/os=linux
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 28 Jan 2023 18:37:36 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-052675-m03
AcquireTime: <unset>
RenewTime: Sat, 28 Jan 2023 18:39:49 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 28 Jan 2023 18:37:46 +0000 Sat, 28 Jan 2023 18:37:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 28 Jan 2023 18:37:46 +0000 Sat, 28 Jan 2023 18:37:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 28 Jan 2023 18:37:46 +0000 Sat, 28 Jan 2023 18:37:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 28 Jan 2023 18:37:46 +0000 Sat, 28 Jan 2023 18:37:36 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.58.4
Hostname: multinode-052675-m03
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871752Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871752Ki
pods: 110
System Info:
Machine ID: f1a46cb41c9d45969ef9bdf4a48d9b28
System UUID: 8b3e074d-faf8-4a45-9c58-bdde0f022139
Boot ID: c2f3d462-b386-480a-bd1b-c0d90433fb30
Kernel Version: 5.15.0-1027-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.23
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-ncz56 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 2m47s
kube-system kube-proxy-h7dv6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m47s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (1%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m6s kube-proxy
Normal Starting 2m44s kube-proxy
Normal Starting 2m48s kubelet Starting kubelet.
Normal NodeHasSufficientPID 2m47s (x2 over 2m47s) kubelet Node multinode-052675-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m47s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 2m47s kubelet Node multinode-052675-m03 status is now: NodeReady
Normal NodeHasNoDiskPressure 2m47s (x2 over 2m47s) kubelet Node multinode-052675-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m47s (x2 over 2m47s) kubelet Node multinode-052675-m03 status is now: NodeHasSufficientMemory
Normal Starting 2m27s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m26s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m20s (x7 over 2m26s) kubelet Node multinode-052675-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m20s (x7 over 2m26s) kubelet Node multinode-052675-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m20s (x7 over 2m26s) kubelet Node multinode-052675-m03 status is now: NodeHasSufficientPID
*
* ==> dmesg <==
* [ +0.008762] FS-Cache: O-key=[8] '8da00f0200000000'
[ +0.006277] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
[ +0.007955] FS-Cache: N-cookie d=00000000cd953fdb{9p.inode} n=00000000b97f67f9
[ +0.008737] FS-Cache: N-key=[8] '8da00f0200000000'
[ +3.705268] FS-Cache: Duplicate cookie detected
[ +0.004702] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006748] FS-Cache: O-cookie d=00000000cd953fdb{9p.inode} n=00000000395d31ad
[ +0.007360] FS-Cache: O-key=[8] '8ca00f0200000000'
[ +0.004951] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.006702] FS-Cache: N-cookie d=00000000cd953fdb{9p.inode} n=000000005727019b
[ +0.008767] FS-Cache: N-key=[8] '8ca00f0200000000'
[ +0.406755] FS-Cache: Duplicate cookie detected
[ +0.004703] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006860] FS-Cache: O-cookie d=00000000cd953fdb{9p.inode} n=000000009937b098
[ +0.007457] FS-Cache: O-key=[8] '9aa00f0200000000'
[ +0.004949] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.006560] FS-Cache: N-cookie d=00000000cd953fdb{9p.inode} n=00000000a225fe91
[ +0.007364] FS-Cache: N-key=[8] '9aa00f0200000000'
[ +2.415873] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 dd 3d 45 61 a1 08 06
[Jan28 18:29] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jan28 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a 41 fe 2b e6 75 08 06
[Jan28 18:34] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff fe 24 06 ac 67 52 08 06
*
* ==> etcd [c4215b5f1c76] <==
* {"level":"info","ts":"2023-01-28T18:36:00.408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
{"level":"info","ts":"2023-01-28T18:36:00.408Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
{"level":"info","ts":"2023-01-28T18:36:00.410Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
{"level":"info","ts":"2023-01-28T18:36:00.899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-052675 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-28T18:36:00.901Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-28T18:36:00.902Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
*
* ==> kernel <==
* 18:39:56 up 22 min, 0 users, load average: 0.39, 0.95, 0.90
Linux multinode-052675 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [a37732694916] <==
* I0128 18:36:02.873375 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0128 18:36:02.895523 1 controller.go:615] quota admission added evaluator for: namespaces
I0128 18:36:02.903629 1 shared_informer.go:280] Caches are synced for node_authorizer
I0128 18:36:02.929817 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0128 18:36:02.929861 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0128 18:36:02.930123 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0128 18:36:02.930143 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0128 18:36:02.930447 1 shared_informer.go:280] Caches are synced for configmaps
I0128 18:36:02.930552 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0128 18:36:03.522813 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0128 18:36:03.735798 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0128 18:36:03.739181 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0128 18:36:03.739197 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0128 18:36:04.165037 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0128 18:36:04.198137 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0128 18:36:04.308790 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0128 18:36:04.317286 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
I0128 18:36:04.318546 1 controller.go:615] quota admission added evaluator for: endpoints
I0128 18:36:04.322956 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0128 18:36:04.781709 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0128 18:36:05.773263 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0128 18:36:05.784279 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0128 18:36:05.795029 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0128 18:36:18.250522 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0128 18:36:18.499856 1 controller.go:615] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [90ac627c99fc] <==
* I0128 18:36:18.759346 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-c28p8"
I0128 18:36:19.123897 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
I0128 18:36:19.132020 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-nzbz8"
W0128 18:36:41.937397 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-052675-m02" does not exist
I0128 18:36:41.947229 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8btnm"
I0128 18:36:41.949315 1 range_allocator.go:372] Set node multinode-052675-m02 PodCIDR to [10.244.1.0/24]
I0128 18:36:41.949483 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-x4b6m"
W0128 18:36:42.550325 1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
W0128 18:36:42.699194 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-052675-m02. Assuming now as a timestamp.
I0128 18:36:42.699264 1 event.go:294] "Event occurred" object="multinode-052675-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-052675-m02 event: Registered Node multinode-052675-m02 in Controller"
I0128 18:36:46.637286 1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
I0128 18:36:46.647355 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-g4wvp"
I0128 18:36:46.651040 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-g84sq"
W0128 18:37:09.288908 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-052675-m03" does not exist
W0128 18:37:09.289000 1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
I0128 18:37:09.296110 1 range_allocator.go:372] Set node multinode-052675-m03 PodCIDR to [10.244.2.0/24]
I0128 18:37:09.298787 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ncz56"
I0128 18:37:09.298817 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h7dv6"
W0128 18:37:09.909007 1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m03 node
I0128 18:37:12.705570 1 event.go:294] "Event occurred" object="multinode-052675-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-052675-m03 event: Registered Node multinode-052675-m03 in Controller"
W0128 18:37:12.705587 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-052675-m03. Assuming now as a timestamp.
W0128 18:37:36.220558 1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
W0128 18:37:36.348964 1 topologycache.go:232] Can't get CPU or zone information for multinode-052675-m02 node
W0128 18:37:36.348991 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-052675-m03" does not exist
I0128 18:37:36.358605 1 range_allocator.go:372] Set node multinode-052675-m03 PodCIDR to [10.244.3.0/24]
*
* ==> kube-proxy [acc3adc5776a] <==
* I0128 18:36:20.495309 1 node.go:163] Successfully retrieved node IP: 192.168.58.2
I0128 18:36:20.495411 1 server_others.go:109] "Detected node IP" address="192.168.58.2"
I0128 18:36:20.495442 1 server_others.go:535] "Using iptables proxy"
I0128 18:36:20.573936 1 server_others.go:176] "Using iptables Proxier"
I0128 18:36:20.573966 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0128 18:36:20.573974 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0128 18:36:20.574002 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0128 18:36:20.574037 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0128 18:36:20.574517 1 server.go:655] "Version info" version="v1.26.1"
I0128 18:36:20.574538 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0128 18:36:20.575067 1 config.go:317] "Starting service config controller"
I0128 18:36:20.575126 1 shared_informer.go:273] Waiting for caches to sync for service config
I0128 18:36:20.575129 1 config.go:444] "Starting node config controller"
I0128 18:36:20.575146 1 shared_informer.go:273] Waiting for caches to sync for node config
I0128 18:36:20.575128 1 config.go:226] "Starting endpoint slice config controller"
I0128 18:36:20.575158 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0128 18:36:20.676141 1 shared_informer.go:280] Caches are synced for service config
I0128 18:36:20.676215 1 shared_informer.go:280] Caches are synced for node config
I0128 18:36:20.676238 1 shared_informer.go:280] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [2e6c4095a993] <==
* W0128 18:36:02.879070 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0128 18:36:02.879083 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0128 18:36:02.879174 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0128 18:36:02.879188 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0128 18:36:02.879224 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0128 18:36:02.879240 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0128 18:36:02.879281 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0128 18:36:02.879303 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0128 18:36:02.879350 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0128 18:36:02.879378 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0128 18:36:02.879412 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0128 18:36:02.879427 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0128 18:36:02.879464 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0128 18:36:02.879479 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0128 18:36:02.879490 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0128 18:36:02.879503 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0128 18:36:03.698225 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0128 18:36:03.698272 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0128 18:36:03.781454 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0128 18:36:03.781488 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0128 18:36:03.828609 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0128 18:36:03.828643 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0128 18:36:03.831588 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0128 18:36:03.831616 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0128 18:36:04.274555 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Sat 2023-01-28 18:35:45 UTC, end at Sat 2023-01-28 18:39:57 UTC. --
Jan 28 18:36:23 multinode-052675 kubelet[2333]: E0128 18:36:23.817000 2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
Jan 28 18:36:23 multinode-052675 kubelet[2333]: E0128 18:36:23.817082 2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
Jan 28 18:36:23 multinode-052675 kubelet[2333]: I0128 18:36:23.920689 2333 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ed0eb028-4b66-4332-b5b0-368ffd3e7e15 path="/var/lib/kubelet/pods/ed0eb028-4b66-4332-b5b0-368ffd3e7e15/volumes"
Jan 28 18:36:24 multinode-052675 kubelet[2333]: I0128 18:36:24.557757 2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e25a79d1a60cc3b21d7a69a05711159b888d24fad0bb0774957e77f3b710441"
Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853588 2333 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\""
Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853658 2333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853697 2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
Jan 28 18:36:24 multinode-052675 kubelet[2333]: E0128 18:36:24.853772 2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
Jan 28 18:36:25 multinode-052675 kubelet[2333]: I0128 18:36:25.572409 2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676181226bb137a5823c453029d084213f81abc5ecd6e563653172d4a868768e"
Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864582 2333 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\""
Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864650 2333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864678 2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
Jan 28 18:36:25 multinode-052675 kubelet[2333]: E0128 18:36:25.864741 2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
Jan 28 18:36:26 multinode-052675 kubelet[2333]: I0128 18:36:26.388050 2333 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 28 18:36:26 multinode-052675 kubelet[2333]: I0128 18:36:26.388665 2333 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 28 18:36:26 multinode-052675 kubelet[2333]: I0128 18:36:26.586132 2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f66fc3eac40c8c6fb3c4eae9927b574f2695d3e22a92f3558999c17cd29bf469"
Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890175 2333 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\""
Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890253 2333 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890289 2333 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\" network for pod \"coredns-787d4945fb-c28p8\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-c28p8_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-c28p8"
Jan 28 18:36:26 multinode-052675 kubelet[2333]: E0128 18:36:26.890376 2333 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-c28p8_kube-system(d87aee89-96d2-4627-a7ec-00a4d69653aa)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc\\\" network for pod \\\"coredns-787d4945fb-c28p8\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-c28p8_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-c28p8" podUID=d87aee89-96d2-4627-a7ec-00a4d69653aa
Jan 28 18:36:27 multinode-052675 kubelet[2333]: I0128 18:36:27.602763 2333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d582f1f003322473a6ab183c3c0ec724c61fd0495c7bea6cacad2a1c65485cc"
Jan 28 18:36:28 multinode-052675 kubelet[2333]: I0128 18:36:28.641347 2333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-c28p8" podStartSLOduration=10.641292152 pod.CreationTimestamp="2023-01-28 18:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 18:36:28.641089485 +0000 UTC m=+22.889728002" watchObservedRunningTime="2023-01-28 18:36:28.641292152 +0000 UTC m=+22.889930687"
Jan 28 18:36:46 multinode-052675 kubelet[2333]: I0128 18:36:46.657757 2333 topology_manager.go:210] "Topology Admit Handler"
Jan 28 18:36:46 multinode-052675 kubelet[2333]: I0128 18:36:46.827090 2333 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zszks\" (UniqueName: \"kubernetes.io/projected/07aca5c2-c0d3-4c53-92e8-47705123ffd3-kube-api-access-zszks\") pod \"busybox-6b86dd6d48-g84sq\" (UID: \"07aca5c2-c0d3-4c53-92e8-47705123ffd3\") " pod="default/busybox-6b86dd6d48-g84sq"
Jan 28 18:36:48 multinode-052675 kubelet[2333]: I0128 18:36:48.786444 2333 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-g84sq" podStartSLOduration=-9.223372034068388e+09 pod.CreationTimestamp="2023-01-28 18:36:46 +0000 UTC" firstStartedPulling="2023-01-28 18:36:47.242166223 +0000 UTC m=+41.490804732" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 18:36:48.786118931 +0000 UTC m=+43.034757446" watchObservedRunningTime="2023-01-28 18:36:48.786388476 +0000 UTC m=+43.035026992"
*
* ==> storage-provisioner [aeb357b4e209] <==
* I0128 18:36:21.407911 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0128 18:36:21.417104 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0128 18:36:21.417185 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0128 18:36:21.480316 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0128 18:36:21.480478 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b79d23c8-285b-4959-abf4-ca24577373ed", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-052675_4d6fd3f0-4271-421b-bde5-7ca41a58c0d6 became leader
I0128 18:36:21.480523 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-052675_4d6fd3f0-4271-421b-bde5-7ca41a58c0d6!
I0128 18:36:21.581345 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-052675_4d6fd3f0-4271-421b-bde5-7ca41a58c0d6!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-052675 -n multinode-052675
helpers_test.go:261: (dbg) Run: kubectl --context multinode-052675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (149.10s)