=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run: docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run: out/minikube-linux-amd64 -p multinode-585561 node start m03 --alsologtostderr
E0124 17:48:16.866419 10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
E0124 17:48:44.550305 10126 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/ingress-addon-legacy-933654/client.crt: no such file or directory
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-585561 node start m03 --alsologtostderr: exit status 80 (2m26.125838989s)
-- stdout --
* Starting worker node multinode-585561-m03 in cluster multinode-585561
* Pulling base image ...
* Restarting existing docker container for "multinode-585561-m03" ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I0124 17:47:18.652035 147217 out.go:296] Setting OutFile to fd 1 ...
I0124 17:47:18.652212 147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:47:18.652224 147217 out.go:309] Setting ErrFile to fd 2...
I0124 17:47:18.652231 147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:47:18.652405 147217 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
I0124 17:47:18.652748 147217 mustload.go:65] Loading cluster: multinode-585561
I0124 17:47:18.653093 147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:18.653518 147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
W0124 17:47:18.677624 147217 host.go:58] "multinode-585561-m03" host status: Stopped
I0124 17:47:18.681146 147217 out.go:177] * Starting worker node multinode-585561-m03 in cluster multinode-585561
I0124 17:47:18.683581 147217 cache.go:120] Beginning downloading kic base image for docker with docker
I0124 17:47:18.685152 147217 out.go:177] * Pulling base image ...
I0124 17:47:18.686621 147217 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:47:18.686660 147217 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0124 17:47:18.686662 147217 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
I0124 17:47:18.686688 147217 cache.go:57] Caching tarball of preloaded images
I0124 17:47:18.686804 147217 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0124 17:47:18.686816 147217 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0124 17:47:18.686914 147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:47:18.710140 147217 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
I0124 17:47:18.710170 147217 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
I0124 17:47:18.710189 147217 cache.go:193] Successfully downloaded all kic artifacts
I0124 17:47:18.710231 147217 start.go:364] acquiring machines lock for multinode-585561-m03: {Name:mk1e51c84cfdfd4bc99cc8c668c0ed893d777e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 17:47:18.710304 147217 start.go:368] acquired machines lock for "multinode-585561-m03" in 50.145µs
I0124 17:47:18.710329 147217 start.go:96] Skipping create...Using existing machine configuration
I0124 17:47:18.710342 147217 fix.go:55] fixHost starting: m03
I0124 17:47:18.710576 147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
I0124 17:47:18.735820 147217 fix.go:103] recreateIfNeeded on multinode-585561-m03: state=Stopped err=<nil>
W0124 17:47:18.735860 147217 fix.go:129] unexpected machine state, will restart: <nil>
I0124 17:47:18.738309 147217 out.go:177] * Restarting existing docker container for "multinode-585561-m03" ...
I0124 17:47:18.740196 147217 cli_runner.go:164] Run: docker start multinode-585561-m03
I0124 17:47:19.100258 147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
I0124 17:47:19.126249 147217 kic.go:426] container "multinode-585561-m03" state is running.
I0124 17:47:19.126717 147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:19.151082 147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:47:19.151317 147217 machine.go:88] provisioning docker machine ...
I0124 17:47:19.151359 147217 ubuntu.go:169] provisioning hostname "multinode-585561-m03"
I0124 17:47:19.151414 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:19.175734 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:19.175896 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:19.175917 147217 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-585561-m03 && echo "multinode-585561-m03" | sudo tee /etc/hostname
I0124 17:47:19.176544 147217 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38658->127.0.0.1:32867: read: connection reset by peer
I0124 17:47:22.321766 147217 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561-m03
I0124 17:47:22.321857 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.345338 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:22.345510 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:22.345539 147217 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-585561-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-585561-m03' | sudo tee -a /etc/hosts;
fi
fi
I0124 17:47:22.476699 147217 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0124 17:47:22.476724 147217 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
I0124 17:47:22.476757 147217 ubuntu.go:177] setting up certificates
I0124 17:47:22.476768 147217 provision.go:83] configureAuth start
I0124 17:47:22.476824 147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:22.501758 147217 provision.go:138] copyHostCerts
I0124 17:47:22.501830 147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
I0124 17:47:22.501842 147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
I0124 17:47:22.501907 147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
I0124 17:47:22.501995 147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
I0124 17:47:22.502003 147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
I0124 17:47:22.502026 147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
I0124 17:47:22.502074 147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
I0124 17:47:22.502081 147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
I0124 17:47:22.502100 147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
I0124 17:47:22.502142 147217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561-m03]
I0124 17:47:22.668018 147217 provision.go:172] copyRemoteCerts
I0124 17:47:22.668080 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0124 17:47:22.668111 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.692791 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:22.788105 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0124 17:47:22.806070 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0124 17:47:22.823781 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0124 17:47:22.842616 147217 provision.go:86] duration metric: configureAuth took 365.836584ms
I0124 17:47:22.842641 147217 ubuntu.go:193] setting minikube options for container-runtime
I0124 17:47:22.842831 147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:22.842893 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.867911 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:22.868086 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:22.868102 147217 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0124 17:47:23.001163 147217 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0124 17:47:23.001193 147217 ubuntu.go:71] root file system type: overlay
I0124 17:47:23.001427 147217 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0124 17:47:23.001498 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.025778 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:23.025966 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:23.026067 147217 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0124 17:47:23.166292 147217 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0124 17:47:23.166375 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.190853 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:23.190998 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:23.191016 147217 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0124 17:47:23.324396 147217 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0124 17:47:23.324433 147217 machine.go:91] provisioned docker machine in 4.173084631s
I0124 17:47:23.324446 147217 start.go:300] post-start starting for "multinode-585561-m03" (driver="docker")
I0124 17:47:23.324454 147217 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0124 17:47:23.324515 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0124 17:47:23.324558 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.348407 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.440010 147217 ssh_runner.go:195] Run: cat /etc/os-release
I0124 17:47:23.442722 147217 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0124 17:47:23.442745 147217 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0124 17:47:23.442754 147217 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0124 17:47:23.442780 147217 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0124 17:47:23.442789 147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
I0124 17:47:23.442840 147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
I0124 17:47:23.442906 147217 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
I0124 17:47:23.442974 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0124 17:47:23.449741 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:47:23.468022 147217 start.go:303] post-start completed in 143.560276ms
I0124 17:47:23.468094 147217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 17:47:23.468134 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.494231 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.585067 147217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0124 17:47:23.588891 147217 fix.go:57] fixHost completed within 4.878541027s
I0124 17:47:23.588914 147217 start.go:83] releasing machines lock for "multinode-585561-m03", held for 4.878596131s
I0124 17:47:23.588982 147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:23.612903 147217 ssh_runner.go:195] Run: systemctl --version
I0124 17:47:23.612946 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.612959 147217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0124 17:47:23.613044 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.638647 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.638965 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.755965 147217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0124 17:47:23.760398 147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0124 17:47:23.776928 147217 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0124 17:47:23.777078 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0124 17:47:23.784224 147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0124 17:47:23.797692 147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0124 17:47:23.804694 147217 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0124 17:47:23.804719 147217 start.go:472] detecting cgroup driver to use...
I0124 17:47:23.804747 147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:47:23.804879 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:47:23.817954 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0124 17:47:23.826669 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0124 17:47:23.835181 147217 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0124 17:47:23.835257 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0124 17:47:23.844020 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:47:23.852138 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0124 17:47:23.860203 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:47:23.868592 147217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0124 17:47:23.875995 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0124 17:47:23.884137 147217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0124 17:47:23.890671 147217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0124 17:47:23.897543 147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:23.988579 147217 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0124 17:47:24.071300 147217 start.go:472] detecting cgroup driver to use...
I0124 17:47:24.071348 147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:47:24.071391 147217 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0124 17:47:24.081973 147217 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0124 17:47:24.082024 147217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0124 17:47:24.092267 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:47:24.106914 147217 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0124 17:47:24.194206 147217 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0124 17:47:24.287286 147217 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0124 17:47:24.287322 147217 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0124 17:47:24.301177 147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:24.385805 147217 ssh_runner.go:195] Run: sudo systemctl restart docker
I0124 17:47:24.634111 147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:47:24.719149 147217 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0124 17:47:24.799006 147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:47:24.876798 147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:24.951113 147217 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0124 17:47:24.966517 147217 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0124 17:47:24.966575 147217 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0124 17:47:24.970021 147217 start.go:540] Will wait 60s for crictl version
I0124 17:47:24.970073 147217 ssh_runner.go:195] Run: which crictl
I0124 17:47:24.973057 147217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0124 17:47:25.051636 147217 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.22
RuntimeApiVersion: v1alpha2
I0124 17:47:25.051708 147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:47:25.078629 147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:47:25.108359 147217 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
I0124 17:47:25.108449 147217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0124 17:47:25.208629 147217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-24 17:47:25.130756489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0124 17:47:25.208751 147217 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 17:47:25.231655 147217 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0124 17:47:25.235105 147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 17:47:25.244533 147217 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.4
I0124 17:47:25.244572 147217 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:47:25.244721 147217 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
I0124 17:47:25.244772 147217 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
I0124 17:47:25.244875 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
W0124 17:47:25.244914 147217 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
I0124 17:47:25.244927 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
I0124 17:47:25.244963 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
I0124 17:47:25.244998 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
I0124 17:47:25.245039 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
I0124 17:47:25.245092 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:47:25.245636 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0124 17:47:25.263205 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0124 17:47:25.279976 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0124 17:47:25.297082 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0124 17:47:25.314430 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
I0124 17:47:25.331786 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0124 17:47:25.349013 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
I0124 17:47:25.366363 147217 ssh_runner.go:195] Run: openssl version
I0124 17:47:25.371190 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
I0124 17:47:25.378805 147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
I0124 17:47:25.382155 147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
I0124 17:47:25.382196 147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
I0124 17:47:25.386851 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
I0124 17:47:25.394234 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0124 17:47:25.401432 147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.404313 147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.404366 147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.409124 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0124 17:47:25.416025 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
I0124 17:47:25.423551 147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
I0124 17:47:25.426645 147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
I0124 17:47:25.426699 147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
I0124 17:47:25.431581 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
I0124 17:47:25.438759 147217 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0124 17:47:25.506238 147217 cni.go:84] Creating CNI manager for ""
I0124 17:47:25.506261 147217 cni.go:136] 3 nodes found, recommending kindnet
I0124 17:47:25.506269 147217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0124 17:47:25.506288 147217 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0124 17:47:25.506477 147217 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.4
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-585561-m03"
kubeletExtraArgs:
node-ip: 192.168.58.4
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0124 17:47:25.506583 147217 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0124 17:47:25.506647 147217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0124 17:47:25.513891 147217 binaries.go:44] Found k8s binaries, skipping transfer
I0124 17:47:25.513959 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0124 17:47:25.520827 147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0124 17:47:25.533248 147217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0124 17:47:25.545607 147217 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0124 17:47:25.548534 147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 17:47:25.557708 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.557886 147217 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I0124 17:47:25.557955 147217 addons.go:65] Setting storage-provisioner=true in profile "multinode-585561"
I0124 17:47:25.557964 147217 addons.go:65] Setting default-storageclass=true in profile "multinode-585561"
I0124 17:47:25.557974 147217 addons.go:227] Setting addon storage-provisioner=true in "multinode-585561"
W0124 17:47:25.557982 147217 addons.go:236] addon storage-provisioner should already be in state true
I0124 17:47:25.557994 147217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-585561"
I0124 17:47:25.558054 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.557994 147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:25.557988 147217 start.go:288] JoinCluster: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 17:47:25.558142 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0124 17:47:25.558192 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.558320 147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.558508 147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.588701 147217 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:47:25.586744 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.590558 147217 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0124 17:47:25.590579 147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0124 17:47:25.590626 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.600319 147217 addons.go:227] Setting addon default-storageclass=true in "multinode-585561"
W0124 17:47:25.600342 147217 addons.go:236] addon default-storageclass should already be in state true
I0124 17:47:25.600364 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.600751 147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.620738 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.627555 147217 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0124 17:47:25.627579 147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0124 17:47:25.627621 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.653298 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.723057 147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0124 17:47:25.741816 147217 start.go:301] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:25.741868 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.742149 147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-585561-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0124 17:47:25.742194 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.759614 147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0124 17:47:25.771204 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:26.089951 147217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0124 17:47:26.091518 147217 addons.go:488] enableAddons completed in 533.631743ms
I0124 17:47:26.167061 147217 node.go:109] successfully drained node "m03"
I0124 17:47:26.171811 147217 node.go:125] successfully deleted node "m03"
I0124 17:47:26.171837 147217 start.go:305] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:26.171858 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:26.171877 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:26.374254 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:26.210765 1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:26.374279 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:26.374290 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:26.412768 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:26.412814 147217 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:26.210765 1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.460571 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:37.460619 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:37.616489 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:37.498525 1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.616527 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:37.616543 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:37.653897 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.653931 147217 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:37.498525 1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.262453 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:59.262504 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:59.420123 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:59.298142 2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.420152 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:59.420168 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:59.459780 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.459822 147217 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:59.298142 2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.662843 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:48:25.662895 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:48:25.817871 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:25.697923 2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.817906 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:48:25.817920 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:48:25.857241 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.857277 147217 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:25.697923 2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.505667 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:48:57.505727 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:48:57.660095 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:57.541681 2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.660123 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:48:57.660137 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:48:57.698398 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.698425 147217 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:57.541681 2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
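Note: the delays between attempts above (about 11s, 21s, 26s, 31s, then 46s) come from minikube's retry helper, which grows the wait on every failure. As an illustrative shell sketch only (join_cmd is a placeholder, and the growth factor is an assumption, not minikube's actual implementation):

  join_cmd='sudo kubeadm join control-plane.minikube.internal:8443 ...'  # placeholder, not the real invocation
  delay=11
  until eval "$join_cmd"; do
    sleep "$delay"
    delay=$(( delay * 3 / 2 ))  # the real delays also carry randomized jitter
  done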
I0124 17:49:44.508544 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:49:44.508609 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:49:44.662615 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:49:44.543023 3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:49:44.662638 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:49:44.662652 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:49:44.700290 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:49:44.700323 147217 start.go:290] JoinCluster complete in 2m19.142337617s
I0124 17:49:44.703679 147217 out.go:177]
W0124 17:49:44.705552 147217 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:49:44.543023 3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
W0124 17:49:44.705572 147217 out.go:239] *
W0124 17:49:44.707691 147217 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0124 17:49:44.709648 147217 out.go:177]
** /stderr **
multinode_test.go:254: I0124 17:47:18.652035 147217 out.go:296] Setting OutFile to fd 1 ...
I0124 17:47:18.652212 147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:47:18.652224 147217 out.go:309] Setting ErrFile to fd 2...
I0124 17:47:18.652231 147217 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:47:18.652405 147217 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
I0124 17:47:18.652748 147217 mustload.go:65] Loading cluster: multinode-585561
I0124 17:47:18.653093 147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:18.653518 147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
W0124 17:47:18.677624 147217 host.go:58] "multinode-585561-m03" host status: Stopped
I0124 17:47:18.681146 147217 out.go:177] * Starting worker node multinode-585561-m03 in cluster multinode-585561
I0124 17:47:18.683581 147217 cache.go:120] Beginning downloading kic base image for docker with docker
I0124 17:47:18.685152 147217 out.go:177] * Pulling base image ...
I0124 17:47:18.686621 147217 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:47:18.686660 147217 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0124 17:47:18.686662 147217 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
I0124 17:47:18.686688 147217 cache.go:57] Caching tarball of preloaded images
I0124 17:47:18.686804 147217 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0124 17:47:18.686816 147217 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0124 17:47:18.686914 147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:47:18.710140 147217 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
I0124 17:47:18.710170 147217 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
I0124 17:47:18.710189 147217 cache.go:193] Successfully downloaded all kic artifacts
I0124 17:47:18.710231 147217 start.go:364] acquiring machines lock for multinode-585561-m03: {Name:mk1e51c84cfdfd4bc99cc8c668c0ed893d777e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 17:47:18.710304 147217 start.go:368] acquired machines lock for "multinode-585561-m03" in 50.145µs
I0124 17:47:18.710329 147217 start.go:96] Skipping create...Using existing machine configuration
I0124 17:47:18.710342 147217 fix.go:55] fixHost starting: m03
I0124 17:47:18.710576 147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
I0124 17:47:18.735820 147217 fix.go:103] recreateIfNeeded on multinode-585561-m03: state=Stopped err=<nil>
W0124 17:47:18.735860 147217 fix.go:129] unexpected machine state, will restart: <nil>
I0124 17:47:18.738309 147217 out.go:177] * Restarting existing docker container for "multinode-585561-m03" ...
I0124 17:47:18.740196 147217 cli_runner.go:164] Run: docker start multinode-585561-m03
I0124 17:47:19.100258 147217 cli_runner.go:164] Run: docker container inspect multinode-585561-m03 --format={{.State.Status}}
I0124 17:47:19.126249 147217 kic.go:426] container "multinode-585561-m03" state is running.
I0124 17:47:19.126717 147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:19.151082 147217 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:47:19.151317 147217 machine.go:88] provisioning docker machine ...
I0124 17:47:19.151359 147217 ubuntu.go:169] provisioning hostname "multinode-585561-m03"
I0124 17:47:19.151414 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:19.175734 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:19.175896 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:19.175917 147217 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-585561-m03 && echo "multinode-585561-m03" | sudo tee /etc/hostname
I0124 17:47:19.176544 147217 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38658->127.0.0.1:32867: read: connection reset by peer
I0124 17:47:22.321766 147217 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561-m03
I0124 17:47:22.321857 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.345338 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:22.345510 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:22.345539 147217 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-585561-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-585561-m03' | sudo tee -a /etc/hosts;
fi
fi
I0124 17:47:22.476699 147217 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0124 17:47:22.476724 147217 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
I0124 17:47:22.476757 147217 ubuntu.go:177] setting up certificates
I0124 17:47:22.476768 147217 provision.go:83] configureAuth start
I0124 17:47:22.476824 147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:22.501758 147217 provision.go:138] copyHostCerts
I0124 17:47:22.501830 147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
I0124 17:47:22.501842 147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
I0124 17:47:22.501907 147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
I0124 17:47:22.501995 147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
I0124 17:47:22.502003 147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
I0124 17:47:22.502026 147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
I0124 17:47:22.502074 147217 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
I0124 17:47:22.502081 147217 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
I0124 17:47:22.502100 147217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
I0124 17:47:22.502142 147217 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561-m03 san=[192.168.58.4 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561-m03]
I0124 17:47:22.668018 147217 provision.go:172] copyRemoteCerts
I0124 17:47:22.668080 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0124 17:47:22.668111 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.692791 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:22.788105 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0124 17:47:22.806070 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0124 17:47:22.823781 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0124 17:47:22.842616 147217 provision.go:86] duration metric: configureAuth took 365.836584ms
I0124 17:47:22.842641 147217 ubuntu.go:193] setting minikube options for container-runtime
I0124 17:47:22.842831 147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:22.842893 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:22.867911 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:22.868086 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:22.868102 147217 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0124 17:47:23.001163 147217 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0124 17:47:23.001193 147217 ubuntu.go:71] root file system type: overlay
I0124 17:47:23.001427 147217 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0124 17:47:23.001498 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.025778 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:23.025966 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:23.026067 147217 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0124 17:47:23.166292 147217 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0124 17:47:23.166375 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.190853 147217 main.go:141] libmachine: Using SSH client type: native
I0124 17:47:23.190998 147217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32867 <nil> <nil>}
I0124 17:47:23.191016 147217 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0124 17:47:23.324396 147217 main.go:141] libmachine: SSH cmd err, output: <nil>:
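Note: the diff-or-install one-liner above only swaps in docker.service.new and restarts Docker when the freshly rendered unit actually differs from the installed one, which keeps the provisioning step idempotent. The same idiom in isolation, with hypothetical file and service names:

  new=/tmp/example.service.new             # hypothetical paths, for illustration only
  cur=/etc/systemd/system/example.service
  sudo diff -u "$cur" "$new" || {
    sudo mv "$new" "$cur"
    sudo systemctl daemon-reload
    sudo systemctl restart example
  }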
I0124 17:47:23.324433 147217 machine.go:91] provisioned docker machine in 4.173084631s
I0124 17:47:23.324446 147217 start.go:300] post-start starting for "multinode-585561-m03" (driver="docker")
I0124 17:47:23.324454 147217 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0124 17:47:23.324515 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0124 17:47:23.324558 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.348407 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.440010 147217 ssh_runner.go:195] Run: cat /etc/os-release
I0124 17:47:23.442722 147217 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0124 17:47:23.442745 147217 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0124 17:47:23.442754 147217 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0124 17:47:23.442780 147217 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0124 17:47:23.442789 147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
I0124 17:47:23.442840 147217 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
I0124 17:47:23.442906 147217 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
I0124 17:47:23.442974 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0124 17:47:23.449741 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:47:23.468022 147217 start.go:303] post-start completed in 143.560276ms
I0124 17:47:23.468094 147217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 17:47:23.468134 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.494231 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.585067 147217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0124 17:47:23.588891 147217 fix.go:57] fixHost completed within 4.878541027s
I0124 17:47:23.588914 147217 start.go:83] releasing machines lock for "multinode-585561-m03", held for 4.878596131s
I0124 17:47:23.588982 147217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m03
I0124 17:47:23.612903 147217 ssh_runner.go:195] Run: systemctl --version
I0124 17:47:23.612946 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.612959 147217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0124 17:47:23.613044 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m03
I0124 17:47:23.638647 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.638965 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32867 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m03/id_rsa Username:docker}
I0124 17:47:23.755965 147217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0124 17:47:23.760398 147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0124 17:47:23.776928 147217 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0124 17:47:23.777078 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0124 17:47:23.784224 147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0124 17:47:23.797692 147217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0124 17:47:23.804694 147217 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0124 17:47:23.804719 147217 start.go:472] detecting cgroup driver to use...
I0124 17:47:23.804747 147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:47:23.804879 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:47:23.817954 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0124 17:47:23.826669 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0124 17:47:23.835181 147217 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0124 17:47:23.835257 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0124 17:47:23.844020 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:47:23.852138 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0124 17:47:23.860203 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:47:23.868592 147217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0124 17:47:23.875995 147217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0124 17:47:23.884137 147217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0124 17:47:23.890671 147217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0124 17:47:23.897543 147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:23.988579 147217 ssh_runner.go:195] Run: sudo systemctl restart containerd
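The sed edits above amount to the following self-contained sequence, shown here only as a consolidated sketch of what the log runs step by step:

    # switch containerd to the cgroupfs driver and the runc v2 shim,
    # point it at the standard CNI conf dir, then pick up the new config
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd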
I0124 17:47:24.071300 147217 start.go:472] detecting cgroup driver to use...
I0124 17:47:24.071348 147217 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:47:24.071391 147217 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0124 17:47:24.081973 147217 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0124 17:47:24.082024 147217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0124 17:47:24.092267 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:47:24.106914 147217 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0124 17:47:24.194206 147217 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0124 17:47:24.287286 147217 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0124 17:47:24.287322 147217 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0124 17:47:24.301177 147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:24.385805 147217 ssh_runner.go:195] Run: sudo systemctl restart docker
I0124 17:47:24.634111 147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:47:24.719149 147217 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0124 17:47:24.799006 147217 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:47:24.876798 147217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:47:24.951113 147217 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
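The 144-byte /etc/docker/daemon.json copied above is not printed in the log; a minimal equivalent that matches the "configuring docker to use cgroupfs" message would be the following (an assumption, not the verbatim file minikube writes):

    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker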
I0124 17:47:24.966517 147217 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0124 17:47:24.966575 147217 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0124 17:47:24.970021 147217 start.go:540] Will wait 60s for crictl version
I0124 17:47:24.970073 147217 ssh_runner.go:195] Run: which crictl
I0124 17:47:24.973057 147217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0124 17:47:25.051636 147217 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.22
RuntimeApiVersion: v1alpha2
I0124 17:47:25.051708 147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:47:25.078629 147217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:47:25.108359 147217 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
I0124 17:47:25.108449 147217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0124 17:47:25.208629 147217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2023-01-24 17:47:25.130756489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0124 17:47:25.208751 147217 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 17:47:25.231655 147217 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0124 17:47:25.235105 147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 17:47:25.244533 147217 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.4
I0124 17:47:25.244572 147217 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:47:25.244721 147217 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
I0124 17:47:25.244772 147217 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
I0124 17:47:25.244875 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
W0124 17:47:25.244914 147217 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
I0124 17:47:25.244927 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
I0124 17:47:25.244963 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
I0124 17:47:25.244998 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
I0124 17:47:25.245039 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
I0124 17:47:25.245092 147217 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:47:25.245636 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0124 17:47:25.263205 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0124 17:47:25.279976 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0124 17:47:25.297082 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0124 17:47:25.314430 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
I0124 17:47:25.331786 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0124 17:47:25.349013 147217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
I0124 17:47:25.366363 147217 ssh_runner.go:195] Run: openssl version
I0124 17:47:25.371190 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
I0124 17:47:25.378805 147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
I0124 17:47:25.382155 147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
I0124 17:47:25.382196 147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
I0124 17:47:25.386851 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
I0124 17:47:25.394234 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0124 17:47:25.401432 147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.404313 147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.404366 147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0124 17:47:25.409124 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0124 17:47:25.416025 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
I0124 17:47:25.423551 147217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
I0124 17:47:25.426645 147217 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
I0124 17:47:25.426699 147217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
I0124 17:47:25.431581 147217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
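The openssl/ln pairs above implement OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is also reachable via a <subject-hash>.0 symlink so verification can locate it by hash. A sketch of the idiom for one certificate (the b5213941 value matches the minikubeCA link created above):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"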
I0124 17:47:25.438759 147217 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0124 17:47:25.506238 147217 cni.go:84] Creating CNI manager for ""
I0124 17:47:25.506261 147217 cni.go:136] 3 nodes found, recommending kindnet
I0124 17:47:25.506269 147217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0124 17:47:25.506288 147217 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.4 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0124 17:47:25.506477 147217 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.4
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-585561-m03"
kubeletExtraArgs:
node-ip: 192.168.58.4
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0124 17:47:25.506583 147217 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.4
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0124 17:47:25.506647 147217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0124 17:47:25.513891 147217 binaries.go:44] Found k8s binaries, skipping transfer
I0124 17:47:25.513959 147217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0124 17:47:25.520827 147217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0124 17:47:25.533248 147217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
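With the drop-in and unit file in place, the effective kubelet unit can be checked with standard systemd tooling (a verification sketch; the log itself does not run these commands):

    systemctl cat kubelet        # shows kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload # reload units after the copy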
I0124 17:47:25.545607 147217 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0124 17:47:25.548534 147217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 17:47:25.557708 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.557886 147217 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I0124 17:47:25.557955 147217 addons.go:65] Setting storage-provisioner=true in profile "multinode-585561"
I0124 17:47:25.557964 147217 addons.go:65] Setting default-storageclass=true in profile "multinode-585561"
I0124 17:47:25.557974 147217 addons.go:227] Setting addon storage-provisioner=true in "multinode-585561"
W0124 17:47:25.557982 147217 addons.go:236] addon storage-provisioner should already be in state true
I0124 17:47:25.557994 147217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-585561"
I0124 17:47:25.558054 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.557994 147217 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:47:25.557988 147217 start.go:288] JoinCluster: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 17:47:25.558142 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0124 17:47:25.558192 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.558320 147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.558508 147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.588701 147217 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:47:25.586744 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.590558 147217 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0124 17:47:25.590579 147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0124 17:47:25.590626 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.600319 147217 addons.go:227] Setting addon default-storageclass=true in "multinode-585561"
W0124 17:47:25.600342 147217 addons.go:236] addon default-storageclass should already be in state true
I0124 17:47:25.600364 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.600751 147217 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:47:25.620738 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.627555 147217 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0124 17:47:25.627579 147217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0124 17:47:25.627621 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.653298 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:25.723057 147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0124 17:47:25.741816 147217 start.go:301] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:25.741868 147217 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:47:25.742149 147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl drain multinode-585561-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
I0124 17:47:25.742194 147217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:47:25.759614 147217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0124 17:47:25.771204 147217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:47:26.089951 147217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0124 17:47:26.091518 147217 addons.go:488] enableAddons completed in 533.631743ms
I0124 17:47:26.167061 147217 node.go:109] successfully drained node "m03"
I0124 17:47:26.171811 147217 node.go:125] successfully deleted node "m03"
I0124 17:47:26.171837 147217 start.go:305] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:26.171858 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:26.171877 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:26.374254 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:26.210765 1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
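The final stderr line above is the actual failure: the control plane still holds a Ready Node object named multinode-585561-m03, so kubeadm's kubelet-start phase refuses to register a duplicate, suggesting the old kubelet re-registered the node between minikube's delete at 17:47:26.171 and this join. A manual workaround sketch, run against the control plane (flags as supported by kubectl v1.26):

    # remove the stale Node object so a fresh kubeadm join can register it
    kubectl drain multinode-585561-m03 --ignore-daemonsets --delete-emptydir-data --force
    kubectl delete node multinode-585561-m03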
I0124 17:47:26.374279 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:26.374290 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:26.412768 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
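kubeadm reset fails here because both containerd.sock and cri-dockerd.sock exist in the node container, and without a criSocket hint kubeadm cannot pick one. Passing the socket explicitly, as the join command above already does, would disambiguate; a sketch:

    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock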
I0124 17:47:26.412814 147217 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:26.210765 1439 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.460571 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:37.460619 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:37.616489 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:37.498525 1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.616527 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:37.616543 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:37.653897 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:37.653931 147217 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:37.498525 1673 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.262453 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:47:59.262504 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:47:59.420123 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:59.298142 2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.420152 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:47:59.420168 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:47:59.459780 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:47:59.459822 147217 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:47:59.298142 2145 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.662843 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:48:25.662895 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:48:25.817871 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:25.697923 2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.817906 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:48:25.817920 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:48:25.857241 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:25.857277 147217 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:25.697923 2442 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.505667 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:48:57.505727 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:48:57.660095 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:57.541681 2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.660123 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:48:57.660137 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:48:57.698398 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:48:57.698425 147217 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:48:57.541681 2747 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:49:44.508544 147217 start.go:309] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime: ControlPlane:false Worker:true}
I0124 17:49:44.508609 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03"
E0124 17:49:44.662615 147217 start.go:311] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:49:44.543023 3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
I0124 17:49:44.662638 147217 start.go:314] resetting worker node "m03" before attempting to rejoin cluster...
I0124 17:49:44.662652 147217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force"
I0124 17:49:44.700290 147217 start.go:316] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0124 17:49:44.700323 147217 start.go:290] JoinCluster complete in 2m19.142337617s
I0124 17:49:44.703679 147217 out.go:177]
W0124 17:49:44.705552 147217 out.go:239] X Exiting due to GUEST_NODE_START: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m26qli.r215g2d7opn8zgb6 --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m03": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1027-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
stderr:
W0124 17:49:44.543023 3162 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
[WARNING Port-10250]: Port 10250 is in use
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: a Node with name "multinode-585561-m03" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
W0124 17:49:44.705572 147217 out.go:239] *
W0124 17:49:44.707691 147217 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0124 17:49:44.709648 147217 out.go:177]
multinode_test.go:255: node start returned an error. args "out/minikube-linux-amd64 -p multinode-585561 node start m03 --alsologtostderr": exit status 80
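The exit above chains two problems visible in the stderr: the apiserver still holds a Node named "multinode-585561-m03" in status "Ready" from before the stop, so kubeadm join refuses to re-register it, and the automatic kubeadm reset that would clean it up aborts because the guest exposes two CRI endpoints (containerd and cri-dockerd). A plausible manual recovery, sketched with the profile, node, and socket path taken from this log (illustrative commands, not steps the harness ran):
# Remove the stale Node object that the join collided with.
kubectl delete node multinode-585561-m03
# Reset the worker with the CRI socket pinned, since kubeadm cannot
# choose between containerd.sock and cri-dockerd.sock on its own.
out/minikube-linux-amd64 -p multinode-585561 ssh -n m03 -- sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock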
multinode_test.go:259: (dbg) Run: out/minikube-linux-amd64 -p multinode-585561 status
multinode_test.go:273: (dbg) Run: kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect multinode-585561
helpers_test.go:235: (dbg) docker inspect multinode-585561:
-- stdout --
[
{
"Id": "cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37",
"Created": "2023-01-24T17:45:29.114439725Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 128759,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-01-24T17:45:29.477128533Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
"ResolvConfPath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/hostname",
"HostsPath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/hosts",
"LogPath": "/var/lib/docker/containers/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37/cff9d026e22ca14f96db37a0580454cc07625aacc551ba8fc57fb88021a2ca37-json.log",
"Name": "/multinode-585561",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"multinode-585561:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "multinode-585561",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f-init/diff:/var/lib/docker/overlay2/c0f6dd4fb02f7ad02ac9f070fe21bdce826b05ddd2d4864f5d03facc86ec9ecc/diff:/var/lib/docker/overlay2/d2765ba50729ba695f42f46a7962c3519217eee28174849e85afadbf6b0e02d6/diff:/var/lib/docker/overlay2/309bf5708416378c17fc70427d4f2456f99f7fba90e3a234d34bfe13a2c59f12/diff:/var/lib/docker/overlay2/56f885e6f444248a029fc5b9208073963c6309559557c10307b26dcf0e30a995/diff:/var/lib/docker/overlay2/9ba0736edb7b66db737f51458527fbdb399a0807534f33ddc2f44cda6a8bd6d1/diff:/var/lib/docker/overlay2/f4e07abaa5d333f487a0edb77aad2f0af86ce4fd18c9a638cb401437a32f4d74/diff:/var/lib/docker/overlay2/00d3f326fb5e24a0682a26ab4f237656d873e100c29691fdb55be303b2185d58/diff:/var/lib/docker/overlay2/39df02652678fc73d7f221b726c0a3c6f4d6829085620f3480306ee5366370a8/diff:/var/lib/docker/overlay2/f89bbc718777cb4603fad4be8968b39ceee7410ad49ad3efdec549691abb15e9/diff:/var/lib/docker/overlay2/0bc828
e5958e3308bc5bc21337653e4c70d63cf0250c7a996820d7e263d4b782/diff:/var/lib/docker/overlay2/960bb317e53c181050c19f97b8bdf3f8ea1ee37186960c105f4216b9a1db2749/diff:/var/lib/docker/overlay2/020e2ab5c70c297cee27e775db50c2d397921e19e31d24f8e0fffb93ccc480ee/diff:/var/lib/docker/overlay2/38292f0ce0a8c510703a3889510830c29e47c20fc6b836727d66579217b4aa9c/diff:/var/lib/docker/overlay2/2240207f0bcbbbf807a6a2f426df2f218dbe10587d8c23f4b3470573e5d95fd4/diff:/var/lib/docker/overlay2/5cb29ea4ba6b3e37954a7dcd08d3090fdea350f0feee4ec33fa89009397f4df0/diff:/var/lib/docker/overlay2/e020b8a1019b51428090e953307cfb464abb245cb10162522f9ce462cba4eae3/diff:/var/lib/docker/overlay2/dedc1cd320ab9a914dcc9de1bc6dc55b769c26e01b2e220e5b101264cf3885fd/diff:/var/lib/docker/overlay2/d57af40191f2b434bba5bb6d799793eac2c6cb2d478bd7c64158ab270aa7b748/diff:/var/lib/docker/overlay2/6405dc6842891f477861f193833a331c97a4ca02fce456ec2e80aad9de94b015/diff:/var/lib/docker/overlay2/631e58303634bfa60e5c502ec2f568a62c2b2169ae462f1171b3146cf04f5f7e/diff:/var/lib/d
ocker/overlay2/d29fa359059801155d9599e76a6845758ba216d5ea775b01d6ae8f696b8c456b/diff:/var/lib/docker/overlay2/28b702bccbb33fa6cd49012bc362d952de52ad467f4ea93354db79737ae22b03/diff:/var/lib/docker/overlay2/8a7d52ec1a3e894eed2d4271f1df503d0f8cda630fcd1bc15af62184bdaf3d65/diff:/var/lib/docker/overlay2/c9b7f9ea4c8b40bcc4e5041c580dfe6d3517781f4dfddcda0d8aaa7e109a0ec2/diff:/var/lib/docker/overlay2/df47b021373f0eceb801029054f0d9f0612b49d3207f2d163077ad905f488ee5/diff:/var/lib/docker/overlay2/fcf3520ccb48ac6dadbebea4e85a539d1859a06405e690466352317f35b7f17f/diff:/var/lib/docker/overlay2/4d2edf4c993582a042a54f29c78e7524a1f5846a4f6f25463d664b4a4b03d878/diff:/var/lib/docker/overlay2/672267cb3f0664c4fcacd27e02917b0edeaa3867c70baef5dc534a8ccf798ffb/diff:/var/lib/docker/overlay2/ded6694e77d8f645d1aeb5353d7c912883d93df91f1d122bba1d3eabe5aeb5ca/diff:/var/lib/docker/overlay2/d5d7bc0be8ec3dd554cb0bdff490dbfa92cd679d68e433547ce0a558813ded64/diff:/var/lib/docker/overlay2/d992f24d356c8b6303454fa3c4ed34187fa10b2a85830839330cd2866c1
27932/diff:/var/lib/docker/overlay2/625d4aee0fbd36cfefdd61cff165ebb6ea2c45b21cb93928bc8b16ee0289581b/diff:/var/lib/docker/overlay2/b487e0d1b131079e1ed93646b9aab301046224986d2d47a8083397694a7699ec/diff:/var/lib/docker/overlay2/6acd12e207d6d8b1422a0897a896c591cb9e3301c4c46d83c5a2b6e40019dd19/diff:/var/lib/docker/overlay2/5944c728d3d43299b8773a799379ebcf947ab2447a83f1adcc32731fb40ced3c/diff:/var/lib/docker/overlay2/12c67321e07ad577eba23729dc9d9a2edb3a8d4c7de3a1c682c411c00cd14dac/diff:/var/lib/docker/overlay2/89073ac9d49633306646e6ada568a9647c4a88d657e60fd2a0daa3a2bb970598/diff:/var/lib/docker/overlay2/0a290286677b74fb640d9cd6b48d3579d79f4ca62157270f290b74f6a606adf2/diff:/var/lib/docker/overlay2/fccecd53fbac0d1318c0a0f27a725dbaddd955866823c94258132b2db0e10339/diff:/var/lib/docker/overlay2/3f7d25eebece90d8e38d92efa5522717838a52fcf6de68a61a2f3922139ad36c/diff:/var/lib/docker/overlay2/84563ab9d1af117abaf3eadbdfbcd03d46c79f907aa260d46bf795185eaf69b8/diff:/var/lib/docker/overlay2/112ca0d95ec4e2fcaa4a352262498bde563dd0
dcbe1b5a8fb9635be152bae4f9/diff:/var/lib/docker/overlay2/956687ef2d7ff7d948d0cb4b6415751cd49516ed63b9293d0871ca6c6e99af68/diff:/var/lib/docker/overlay2/edb008e0ceae1ade25c3f42e96590263af39296507e3518acc6462d2b9f227d5/diff",
"MergedDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f/merged",
"UpperDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f/diff",
"WorkDir": "/var/lib/docker/overlay2/5176a5b637dd727c44326828be1595b5e60bbc0608ab2936267a87c6decac99f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "multinode-585561",
"Source": "/var/lib/docker/volumes/multinode-585561/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "multinode-585561",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "multinode-585561",
"name.minikube.sigs.k8s.io": "multinode-585561",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "9c31a4ba7465924ba53f6cfa5e1d4ad4c332e4b060ae018292262ea6df072860",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32852"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32851"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32848"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32850"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32849"
}
]
},
"SandboxKey": "/var/run/docker/netns/9c31a4ba7465",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"multinode-585561": {
"IPAMConfig": {
"IPv4Address": "192.168.58.2"
},
"Links": null,
"Aliases": [
"cff9d026e22c",
"multinode-585561"
],
"NetworkID": "d58778b719d578917d23962b648cc107a0848a4f0a97bc7f4d60b63c79e3010d",
"EndpointID": "cd6653c36b268ea98f13f5d4e84fec087782f789f4015ce7fb10e4f474a0084f",
"Gateway": "192.168.58.1",
"IPAddress": "192.168.58.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:3a:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
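The inspect dump above shows a healthy control-plane container: running, privileged, attached to the multinode-585561 bridge at 192.168.58.2, with 8443 published on 127.0.0.1:32849. When only a handful of fields matter, the same data can be pulled with Go templates, in the style the harness itself uses elsewhere in this log (illustrative one-liners, not commands the test ran):
docker inspect multinode-585561 --format={{.State.Status}}
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' multinode-585561
docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' multinode-585561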
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-585561 -n multinode-585561
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-585561 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-585561 logs -n 25: (1.097238732s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| cp | multinode-585561 cp multinode-585561:/home/docker/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m03:/home/docker/cp-test_multinode-585561_multinode-585561-m03.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-585561 ssh -n multinode-585561-m03 sudo cat | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | /home/docker/cp-test_multinode-585561_multinode-585561-m03.txt | | | | | |
| cp | multinode-585561 cp testdata/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m02:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | /tmp/TestMultiNodeserialCopyFile2351278162/001/cp-test_multinode-585561-m02.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561:/home/docker/cp-test_multinode-585561-m02_multinode-585561.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-585561 ssh -n multinode-585561 sudo cat | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | /home/docker/cp-test_multinode-585561-m02_multinode-585561.txt | | | | | |
| cp | multinode-585561 cp multinode-585561-m02:/home/docker/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m03:/home/docker/cp-test_multinode-585561-m02_multinode-585561-m03.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-585561 ssh -n multinode-585561-m03 sudo cat | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | /home/docker/cp-test_multinode-585561-m02_multinode-585561-m03.txt | | | | | |
| cp | multinode-585561 cp testdata/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | /tmp/TestMultiNodeserialCopyFile2351278162/001/cp-test_multinode-585561-m03.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561:/home/docker/cp-test_multinode-585561-m03_multinode-585561.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-585561 ssh -n multinode-585561 sudo cat | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | /home/docker/cp-test_multinode-585561-m03_multinode-585561.txt | | | | | |
| cp | multinode-585561 cp multinode-585561-m03:/home/docker/cp-test.txt | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m02:/home/docker/cp-test_multinode-585561-m03_multinode-585561-m02.txt | | | | | |
| ssh | multinode-585561 ssh -n | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | multinode-585561-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-585561 ssh -n multinode-585561-m02 sudo cat | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| | /home/docker/cp-test_multinode-585561-m03_multinode-585561-m02.txt | | | | | |
| node | multinode-585561 node stop m03 | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | 24 Jan 23 17:47 UTC |
| node | multinode-585561 node start | multinode-585561 | jenkins | v1.28.0 | 24 Jan 23 17:47 UTC | |
| | m03 --alsologtostderr | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
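The last two Audit rows frame the failure: node stop m03 finished at 17:47 UTC, while the node start m03 --alsologtostderr that follows has no end time because it exited with status 80. The pair can be replayed against the same profile outside the harness (a reproduction sketch, assuming the profile from this run still exists):
out/minikube-linux-amd64 -p multinode-585561 node stop m03
out/minikube-linux-amd64 -p multinode-585561 node start m03 --alsologtostderr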
*
* ==> Last Start <==
* Log file created at: 2023/01/24 17:45:22
Running on machine: ubuntu-20-agent
Binary: Built with gc go1.19.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0124 17:45:22.740102 128080 out.go:296] Setting OutFile to fd 1 ...
I0124 17:45:22.740318 128080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:45:22.740351 128080 out.go:309] Setting ErrFile to fd 2...
I0124 17:45:22.740363 128080 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0124 17:45:22.740794 128080 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15565-3637/.minikube/bin
I0124 17:45:22.741486 128080 out.go:303] Setting JSON to false
I0124 17:45:22.742880 128080 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1667,"bootTime":1674580656,"procs":872,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0124 17:45:22.742950 128080 start.go:135] virtualization: kvm guest
I0124 17:45:22.745812 128080 out.go:177] * [multinode-585561] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I0124 17:45:22.747434 128080 out.go:177] - MINIKUBE_LOCATION=15565
I0124 17:45:22.747386 128080 notify.go:220] Checking for updates...
I0124 17:45:22.749323 128080 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0124 17:45:22.751222 128080 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15565-3637/kubeconfig
I0124 17:45:22.752872 128080 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15565-3637/.minikube
I0124 17:45:22.754314 128080 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0124 17:45:22.755958 128080 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0124 17:45:22.757539 128080 driver.go:365] Setting default libvirt URI to qemu:///system
I0124 17:45:22.784306 128080 docker.go:141] docker version: linux-20.10.23:Docker Engine - Community
I0124 17:45:22.784426 128080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0124 17:45:22.877477 128080 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:45:22.803388442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0124 17:45:22.877624 128080 docker.go:282] overlay module found
I0124 17:45:22.879951 128080 out.go:177] * Using the docker driver based on user configuration
I0124 17:45:22.881441 128080 start.go:296] selected driver: docker
I0124 17:45:22.881461 128080 start.go:840] validating driver "docker" against <nil>
I0124 17:45:22.881472 128080 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0124 17:45:22.882208 128080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0124 17:45:22.975806 128080 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2023-01-24 17:45:22.900637343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1027-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b842e528e99d4d4c1686467debf2bd4b88ecd86 Expected:5b842e528e99d4d4c1686467debf2bd4b88ecd86} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0124 17:45:22.975935 128080 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0124 17:45:22.976109 128080 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0124 17:45:22.978387 128080 out.go:177] * Using Docker driver with root privileges
I0124 17:45:22.979872 128080 cni.go:84] Creating CNI manager for ""
I0124 17:45:22.979887 128080 cni.go:136] 0 nodes found, recommending kindnet
I0124 17:45:22.979895 128080 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
I0124 17:45:22.979904 128080 start_flags.go:319] config:
{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 17:45:22.981487 128080 out.go:177] * Starting control plane node multinode-585561 in cluster multinode-585561
I0124 17:45:22.982794 128080 cache.go:120] Beginning downloading kic base image for docker with docker
I0124 17:45:22.984317 128080 out.go:177] * Pulling base image ...
I0124 17:45:22.985690 128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:45:22.985735 128080 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
I0124 17:45:22.985744 128080 cache.go:57] Caching tarball of preloaded images
I0124 17:45:22.985808 128080 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
I0124 17:45:22.985864 128080 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0124 17:45:22.985880 128080 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0124 17:45:22.986242 128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:45:22.986265 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json: {Name:mkd32f750addef5e117c6c613ad00e8eb787ff9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:45:23.008849 128080 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
I0124 17:45:23.008887 128080 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
I0124 17:45:23.008909 128080 cache.go:193] Successfully downloaded all kic artifacts
I0124 17:45:23.008950 128080 start.go:364] acquiring machines lock for multinode-585561: {Name:mkedb2101c6d898ca1123ce19efb5691312160a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 17:45:23.009073 128080 start.go:368] acquired machines lock for "multinode-585561" in 98.617µs
I0124 17:45:23.009100 128080 start.go:93] Provisioning new machine with config: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0124 17:45:23.009192 128080 start.go:125] createHost starting for "" (driver="docker")
I0124 17:45:23.011894 128080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0124 17:45:23.012139 128080 start.go:159] libmachine.API.Create for "multinode-585561" (driver="docker")
I0124 17:45:23.012171 128080 client.go:168] LocalClient.Create starting
I0124 17:45:23.012239 128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem
I0124 17:45:23.012279 128080 main.go:141] libmachine: Decoding PEM data...
I0124 17:45:23.012307 128080 main.go:141] libmachine: Parsing certificate...
I0124 17:45:23.012394 128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem
I0124 17:45:23.012425 128080 main.go:141] libmachine: Decoding PEM data...
I0124 17:45:23.012443 128080 main.go:141] libmachine: Parsing certificate...
I0124 17:45:23.012836 128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0124 17:45:23.034132 128080 cli_runner.go:211] docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0124 17:45:23.034202 128080 network_create.go:281] running [docker network inspect multinode-585561] to gather additional debugging logs...
I0124 17:45:23.034219 128080 cli_runner.go:164] Run: docker network inspect multinode-585561
W0124 17:45:23.055690 128080 cli_runner.go:211] docker network inspect multinode-585561 returned with exit code 1
I0124 17:45:23.055721 128080 network_create.go:284] error running [docker network inspect multinode-585561]: docker network inspect multinode-585561: exit status 1
stdout:
[]
stderr:
Error: No such network: multinode-585561
I0124 17:45:23.055733 128080 network_create.go:286] output of [docker network inspect multinode-585561]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: multinode-585561
** /stderr **
I0124 17:45:23.056076 128080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 17:45:23.079066 128080 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7362ae67aae9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fe:8a:74:74} reservation:<nil>}
I0124 17:45:23.079704 128080 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003cefd0}
I0124 17:45:23.079731 128080 network_create.go:123] attempt to create docker network multinode-585561 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0124 17:45:23.079781 128080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-585561 multinode-585561
I0124 17:45:23.135188 128080 network_create.go:107] docker network multinode-585561 192.168.58.0/24 created
I0124 17:45:23.135214 128080 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-585561" container
I0124 17:45:23.135271 128080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0124 17:45:23.156984 128080 cli_runner.go:164] Run: docker volume create multinode-585561 --label name.minikube.sigs.k8s.io=multinode-585561 --label created_by.minikube.sigs.k8s.io=true
I0124 17:45:23.179713 128080 oci.go:103] Successfully created a docker volume multinode-585561
I0124 17:45:23.179809 128080 cli_runner.go:164] Run: docker run --rm --name multinode-585561-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561 --entrypoint /usr/bin/test -v multinode-585561:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
I0124 17:45:23.761168 128080 oci.go:107] Successfully prepared a docker volume multinode-585561
I0124 17:45:23.761204 128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:45:23.761225 128080 kic.go:190] Starting extracting preloaded images to volume ...
I0124 17:45:23.761292 128080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
I0124 17:45:28.997184 128080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (5.235828225s)
I0124 17:45:28.997211 128080 kic.go:199] duration metric: took 5.235984 seconds to extract preloaded images to volume
W0124 17:45:28.997350 128080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0124 17:45:28.997453 128080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0124 17:45:29.091231 128080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-585561 --name multinode-585561 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-585561 --network multinode-585561 --ip 192.168.58.2 --volume multinode-585561:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
I0124 17:45:29.486016 128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Running}}
I0124 17:45:29.510635 128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:45:29.535825 128080 cli_runner.go:164] Run: docker exec multinode-585561 stat /var/lib/dpkg/alternatives/iptables
I0124 17:45:29.583614 128080 oci.go:144] the created container "multinode-585561" has a running status.
I0124 17:45:29.583651 128080 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa...
I0124 17:45:29.988873 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0124 17:45:29.988916 128080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0124 17:45:30.050707 128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:45:30.074564 128080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0124 17:45:30.074589 128080 kic_runner.go:114] Args: [docker exec --privileged multinode-585561 chown docker:docker /home/docker/.ssh/authorized_keys]
I0124 17:45:30.151420 128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:45:30.174110 128080 machine.go:88] provisioning docker machine ...
I0124 17:45:30.174164 128080 ubuntu.go:169] provisioning hostname "multinode-585561"
I0124 17:45:30.174237 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:30.196678 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:45:30.196974 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0124 17:45:30.197003 128080 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-585561 && echo "multinode-585561" | sudo tee /etc/hostname
I0124 17:45:30.337467 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561
I0124 17:45:30.337539 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:30.359909 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:45:30.360054 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0124 17:45:30.360074 128080 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-585561' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561/g' /etc/hosts;
else
echo '127.0.1.1 multinode-585561' | sudo tee -a /etc/hosts;
fi
fi
I0124 17:45:30.488217 128080 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0124 17:45:30.488252 128080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
I0124 17:45:30.488276 128080 ubuntu.go:177] setting up certificates
I0124 17:45:30.488285 128080 provision.go:83] configureAuth start
I0124 17:45:30.488336 128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561
I0124 17:45:30.510820 128080 provision.go:138] copyHostCerts
I0124 17:45:30.510855 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
I0124 17:45:30.510889 128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
I0124 17:45:30.510896 128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
I0124 17:45:30.510972 128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
I0124 17:45:30.511056 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
I0124 17:45:30.511075 128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
I0124 17:45:30.511079 128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
I0124 17:45:30.511113 128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
I0124 17:45:30.511167 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
I0124 17:45:30.511186 128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
I0124 17:45:30.511195 128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
I0124 17:45:30.511230 128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
I0124 17:45:30.511288 128080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561]
I0124 17:45:30.597657 128080 provision.go:172] copyRemoteCerts
I0124 17:45:30.597711 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0124 17:45:30.597741 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:30.620825 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:45:30.715730 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0124 17:45:30.715814 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0124 17:45:30.733077 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem -> /etc/docker/server.pem
I0124 17:45:30.733135 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0124 17:45:30.750117 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0124 17:45:30.750172 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0124 17:45:30.766220 128080 provision.go:86] duration metric: configureAuth took 277.917524ms
I0124 17:45:30.766248 128080 ubuntu.go:193] setting minikube options for container-runtime
I0124 17:45:30.766448 128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:45:30.766499 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:30.789654 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:45:30.789821 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0124 17:45:30.789842 128080 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0124 17:45:30.920605 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0124 17:45:30.920632 128080 ubuntu.go:71] root file system type: overlay
I0124 17:45:30.920819 128080 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0124 17:45:30.920896 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:30.944263 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:45:30.944406 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0124 17:45:30.944464 128080 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0124 17:45:31.081621 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0124 17:45:31.081701 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:31.104877 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:45:31.105013 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32852 <nil> <nil>}
I0124 17:45:31.105031 128080 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0124 17:45:31.733469 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-12-15 22:25:58.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-24 17:45:31.080138432 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
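The SSH command above is an idempotent update: `diff -u` succeeds (and the block is skipped) when the installed unit already matches the rendered one; otherwise the new file is moved into place and docker is reloaded, enabled, and restarted. A rough local Go equivalent of the same pattern (illustrative only; the real step runs remotely under sudo):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// updateUnit mirrors the shell one-liner: only when the rendered unit
// differs from the installed one is it swapped in and the service bounced.
func updateUnit(rendered []byte) error {
	const path = "/lib/systemd/system/docker.service"
	current, _ := os.ReadFile(path) // a missing file just reads as empty
	if bytes.Equal(current, rendered) {
		return nil // `diff -u old new` succeeded: nothing to do
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if err := exec.Command("systemctl", append([]string{"-f"}, args...)...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := updateUnit([]byte("[Unit]\n")); err != nil {
		log.Fatal(err)
	}
}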
I0124 17:45:31.733506 128080 machine.go:91] provisioned docker machine in 1.559368678s
I0124 17:45:31.733517 128080 client.go:171] LocalClient.Create took 8.721340407s
I0124 17:45:31.733536 128080 start.go:167] duration metric: libmachine.API.Create for "multinode-585561" took 8.721396631s
I0124 17:45:31.733554 128080 start.go:300] post-start starting for "multinode-585561" (driver="docker")
I0124 17:45:31.733561 128080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0124 17:45:31.733623 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0124 17:45:31.733681 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:31.756770 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:45:31.847792 128080 ssh_runner.go:195] Run: cat /etc/os-release
I0124 17:45:31.850310 128080 command_runner.go:130] > NAME="Ubuntu"
I0124 17:45:31.850331 128080 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
I0124 17:45:31.850338 128080 command_runner.go:130] > ID=ubuntu
I0124 17:45:31.850345 128080 command_runner.go:130] > ID_LIKE=debian
I0124 17:45:31.850353 128080 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
I0124 17:45:31.850360 128080 command_runner.go:130] > VERSION_ID="20.04"
I0124 17:45:31.850369 128080 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
I0124 17:45:31.850376 128080 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
I0124 17:45:31.850388 128080 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I0124 17:45:31.850403 128080 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I0124 17:45:31.850415 128080 command_runner.go:130] > VERSION_CODENAME=focal
I0124 17:45:31.850421 128080 command_runner.go:130] > UBUNTU_CODENAME=focal
I0124 17:45:31.850490 128080 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0124 17:45:31.850520 128080 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0124 17:45:31.850538 128080 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0124 17:45:31.850549 128080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
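The three "Couldn't set key" lines are benign: libmachine maps /etc/os-release KEY=VALUE pairs onto a struct and reports keys it has no field for. A hedged sketch of that parsing (the struct fields here are illustrative, not libmachine's exact set):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

type osRelease struct {
	Name       string
	Version    string
	ID         string
	PrettyName string
}

func parse(data string) osRelease {
	var info osRelease
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		v = strings.Trim(v, `"`)
		switch k {
		case "NAME":
			info.Name = v
		case "VERSION":
			info.Version = v
		case "ID":
			info.ID = v
		case "PRETTY_NAME":
			info.PrettyName = v
		default:
			// Matches the log's behavior for PRIVACY_POLICY_URL and friends.
			fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", k)
		}
	}
	return info
}

func main() {
	fmt.Printf("%+v\n", parse("NAME=\"Ubuntu\"\nPRIVACY_POLICY_URL=\"x\"\n"))
}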
I0124 17:45:31.850562 128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
I0124 17:45:31.850662 128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
I0124 17:45:31.850748 128080 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
I0124 17:45:31.850760 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /etc/ssl/certs/101262.pem
I0124 17:45:31.850850 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0124 17:45:31.857267 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:45:31.874214 128080 start.go:303] post-start completed in 140.646195ms
I0124 17:45:31.874580 128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561
I0124 17:45:31.896951 128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:45:31.897246 128080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 17:45:31.897299 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:31.919718 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:45:32.008735 128080 command_runner.go:130] > 23%
I0124 17:45:32.008828 128080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0124 17:45:32.012715 128080 command_runner.go:130] > 227G
I0124 17:45:32.012738 128080 start.go:128] duration metric: createHost completed in 9.003538767s
I0124 17:45:32.012746 128080 start.go:83] releasing machines lock for "multinode-585561", held for 9.003658553s
I0124 17:45:32.012799 128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561
I0124 17:45:32.034917 128080 ssh_runner.go:195] Run: cat /version.json
I0124 17:45:32.034963 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:32.034984 128080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0124 17:45:32.035043 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:45:32.058262 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:45:32.058824 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:45:32.147599 128080 command_runner.go:130] > {"iso_version": "v1.28.0-1672850525-15541", "kicbase_version": "v0.0.36-1674164627-15541", "minikube_version": "v1.28.0", "commit": "09f10d7ce80c70492bae8df2b479c8e82a922c68"}
I0124 17:45:32.147771 128080 ssh_runner.go:195] Run: systemctl --version
I0124 17:45:32.174980 128080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0124 17:45:32.176573 128080 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
I0124 17:45:32.176606 128080 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
I0124 17:45:32.176675 128080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0124 17:45:32.180451 128080 command_runner.go:130] > File: /etc/cni/net.d/200-loopback.conf
I0124 17:45:32.180473 128080 command_runner.go:130] > Size: 54 Blocks: 8 IO Block: 4096 regular file
I0124 17:45:32.180479 128080 command_runner.go:130] > Device: 34h/52d Inode: 538245 Links: 1
I0124 17:45:32.180485 128080 command_runner.go:130] > Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
I0124 17:45:32.180491 128080 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
I0124 17:45:32.180495 128080 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
I0124 17:45:32.180518 128080 command_runner.go:130] > Change: 2023-01-24 17:29:01.213660493 +0000
I0124 17:45:32.180529 128080 command_runner.go:130] > Birth: -
I0124 17:45:32.180755 128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0124 17:45:32.200290 128080 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
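The find/sed pipeline above patches every loopback CNI config in place: it guarantees a "name" field and pins cniVersion to 1.0.0. The same edit expressed on a JSON document in Go (file handling elided; the sample input is hypothetical):

package main

import (
	"encoding/json"
	"fmt"
)

func patchLoopback(raw []byte) ([]byte, error) {
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return nil, err
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // the sed inserts this before "type": "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	return json.MarshalIndent(conf, "", "  ")
}

func main() {
	out, _ := patchLoopback([]byte(`{"cniVersion": "0.3.1", "type": "loopback"}`))
	fmt.Println(string(out))
}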
I0124 17:45:32.200403 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0124 17:45:32.207385 128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0124 17:45:32.219767 128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0124 17:45:32.235632 128080 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf,
I0124 17:45:32.235673 128080 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0124 17:45:32.235688 128080 start.go:472] detecting cgroup driver to use...
I0124 17:45:32.235720 128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:45:32.235879 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:45:32.247792 128080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0124 17:45:32.247816 128080 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
I0124 17:45:32.248468 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0124 17:45:32.256045 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0124 17:45:32.264209 128080 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0124 17:45:32.264275 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0124 17:45:32.272067 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:45:32.279990 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0124 17:45:32.287462 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:45:32.294927 128080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0124 17:45:32.302022 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0124 17:45:32.309785 128080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0124 17:45:32.316018 128080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0124 17:45:32.316074 128080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0124 17:45:32.322494 128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:45:32.395373 128080 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0124 17:45:32.476530 128080 start.go:472] detecting cgroup driver to use...
I0124 17:45:32.476583 128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:45:32.476627 128080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0124 17:45:32.486372 128080 command_runner.go:130] > # /lib/systemd/system/docker.service
I0124 17:45:32.486396 128080 command_runner.go:130] > [Unit]
I0124 17:45:32.486407 128080 command_runner.go:130] > Description=Docker Application Container Engine
I0124 17:45:32.486416 128080 command_runner.go:130] > Documentation=https://docs.docker.com
I0124 17:45:32.486424 128080 command_runner.go:130] > BindsTo=containerd.service
I0124 17:45:32.486433 128080 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
I0124 17:45:32.486441 128080 command_runner.go:130] > Wants=network-online.target
I0124 17:45:32.486452 128080 command_runner.go:130] > Requires=docker.socket
I0124 17:45:32.486459 128080 command_runner.go:130] > StartLimitBurst=3
I0124 17:45:32.486477 128080 command_runner.go:130] > StartLimitIntervalSec=60
I0124 17:45:32.486486 128080 command_runner.go:130] > [Service]
I0124 17:45:32.486493 128080 command_runner.go:130] > Type=notify
I0124 17:45:32.486503 128080 command_runner.go:130] > Restart=on-failure
I0124 17:45:32.486521 128080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0124 17:45:32.486536 128080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0124 17:45:32.486549 128080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0124 17:45:32.486564 128080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0124 17:45:32.486574 128080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0124 17:45:32.486587 128080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0124 17:45:32.486602 128080 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0124 17:45:32.486619 128080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0124 17:45:32.486633 128080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0124 17:45:32.486643 128080 command_runner.go:130] > ExecStart=
I0124 17:45:32.486670 128080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0124 17:45:32.486706 128080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0124 17:45:32.486718 128080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0124 17:45:32.486732 128080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0124 17:45:32.486742 128080 command_runner.go:130] > LimitNOFILE=infinity
I0124 17:45:32.486749 128080 command_runner.go:130] > LimitNPROC=infinity
I0124 17:45:32.486759 128080 command_runner.go:130] > LimitCORE=infinity
I0124 17:45:32.486768 128080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0124 17:45:32.486779 128080 command_runner.go:130] > # Only systemd 226 and above support this version.
I0124 17:45:32.486786 128080 command_runner.go:130] > TasksMax=infinity
I0124 17:45:32.486793 128080 command_runner.go:130] > TimeoutStartSec=0
I0124 17:45:32.486800 128080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0124 17:45:32.486809 128080 command_runner.go:130] > Delegate=yes
I0124 17:45:32.486819 128080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0124 17:45:32.486829 128080 command_runner.go:130] > KillMode=process
I0124 17:45:32.486840 128080 command_runner.go:130] > [Install]
I0124 17:45:32.486850 128080 command_runner.go:130] > WantedBy=multi-user.target
I0124 17:45:32.487238 128080 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0124 17:45:32.487297 128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0124 17:45:32.497393 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:45:32.509897 128080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0124 17:45:32.509922 128080 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0124 17:45:32.510780 128080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0124 17:45:32.590227 128080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0124 17:45:32.681957 128080 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0124 17:45:32.681987 128080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0124 17:45:32.695875 128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:45:32.778228 128080 ssh_runner.go:195] Run: sudo systemctl restart docker
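The 144-byte daemon.json pushed at the "configuring docker to use cgroupfs" step is not shown in the log. A plausible reconstruction, hedged since minikube's exact fields may differ; exec-opts is the standard knob for the cgroup driver:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed contents; only the cgroupfs driver setting is confirmed by the log.
	daemon := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(daemon, "", "  ")
	fmt.Println(string(b)) // this is what gets scp'd to /etc/docker/daemon.json
}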
I0124 17:45:32.974504 128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:45:33.052236 128080 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
I0124 17:45:33.052321 128080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0124 17:45:33.123624 128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:45:33.196557 128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:45:33.273352 128080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0124 17:45:33.284600 128080 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0124 17:45:33.284652 128080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0124 17:45:33.287650 128080 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0124 17:45:33.287672 128080 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0124 17:45:33.287681 128080 command_runner.go:130] > Device: 3fh/63d Inode: 206 Links: 1
I0124 17:45:33.287693 128080 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 999/ docker)
I0124 17:45:33.287707 128080 command_runner.go:130] > Access: 2023-01-24 17:45:33.280295086 +0000
I0124 17:45:33.287711 128080 command_runner.go:130] > Modify: 2023-01-24 17:45:33.280295086 +0000
I0124 17:45:33.287716 128080 command_runner.go:130] > Change: 2023-01-24 17:45:33.280295086 +0000
I0124 17:45:33.287723 128080 command_runner.go:130] > Birth: -
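The "Will wait 60s for socket path" step amounts to polling until /var/run/cri-dockerd.sock exists as a socket, then failing at the deadline. A sketch (the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Accept the path only once it exists and is actually a socket.
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}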
I0124 17:45:33.287734 128080 start.go:540] Will wait 60s for crictl version
I0124 17:45:33.287779 128080 ssh_runner.go:195] Run: which crictl
I0124 17:45:33.290385 128080 command_runner.go:130] > /usr/bin/crictl
I0124 17:45:33.290428 128080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0124 17:45:33.380043 128080 command_runner.go:130] > Version: 0.1.0
I0124 17:45:33.380067 128080 command_runner.go:130] > RuntimeName: docker
I0124 17:45:33.380076 128080 command_runner.go:130] > RuntimeVersion: 20.10.22
I0124 17:45:33.380084 128080 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0124 17:45:33.381625 128080 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.22
RuntimeApiVersion: v1alpha2
I0124 17:45:33.381688 128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:45:33.408137 128080 command_runner.go:130] > 20.10.22
I0124 17:45:33.408211 128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:45:33.435476 128080 command_runner.go:130] > 20.10.22
I0124 17:45:33.438291 128080 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
I0124 17:45:33.438356 128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 17:45:33.461481 128080 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0124 17:45:33.464760 128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
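The /etc/hosts rewrite above strips any stale host.minikube.internal line and appends the current gateway mapping. The same logic in Go (approximate; the shell version matches a tab-anchored suffix and writes via a temp file under sudo):

package main

import (
	"os"
	"strings"
)

// pinHost drops any existing line for name and appends a fresh ip -> name
// mapping, mirroring the grep -v + echo pipeline in the log.
func pinHost(hosts, name, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, _ := os.ReadFile("/etc/hosts")
	out := pinHost(string(data), "host.minikube.internal", "192.168.58.1")
	_ = os.WriteFile("/etc/hosts", []byte(out), 0o644)
}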
I0124 17:45:33.474118 128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:45:33.474185 128080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0124 17:45:33.496239 128080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0124 17:45:33.496266 128080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0124 17:45:33.496275 128080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0124 17:45:33.496284 128080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0124 17:45:33.496292 128080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.4
I0124 17:45:33.496300 128080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:45:33.496309 128080 command_runner.go:130] > registry.k8s.io/etcd:v3.3.8-0-gke.1
I0124 17:45:33.496316 128080 command_runner.go:130] > registry.k8s.io/pause:test2
I0124 17:45:33.496349 128080 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/coredns/coredns:v1.9.4
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/etcd:v3.3.8-0-gke.1
registry.k8s.io/pause:test2
-- /stdout --
I0124 17:45:33.496361 128080 docker.go:636] registry.k8s.io/pause:3.9 wasn't preloaded
I0124 17:45:33.496395 128080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0124 17:45:33.503133 128080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.4":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d","registry.k8s.io/coredns/coredns@sha256:b82e294de6be763f73ae71266c8f5466e7e03c69f3a1de96efd570284d35bb18":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:v3.3.8-0-gke.1":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b","registry.k8s.io/etcd@sha256:786ab1b91730b4171748511553abebaf73df1b5e8f1283d4bb5561728ae47fd5":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:test2":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","registry.k8s.io/pause@sha256:0c17b6b35fafb2de159db2af2c0e40a4c1aa1a210bac1b65fbf807f105899146":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06"}}}
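repositories.json is how the preload check decides what is already present: it maps each repository to its tag and digest references and their image IDs, and registry.k8s.io/pause:3.9 is absent here (only pause:test2 is). A sketch of reading it:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type repositories struct {
	Repositories map[string]map[string]string `json:"Repositories"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/docker/image/overlay2/repositories.json")
	if err != nil {
		fmt.Println(err)
		return
	}
	var r repositories
	if err := json.Unmarshal(raw, &r); err != nil {
		fmt.Println(err)
		return
	}
	for _, refs := range r.Repositories {
		for ref := range refs {
			fmt.Println(ref) // e.g. registry.k8s.io/pause:test2
		}
	}
}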
I0124 17:45:33.503296 128080 ssh_runner.go:195] Run: which lz4
I0124 17:45:33.506024 128080 command_runner.go:130] > /usr/bin/lz4
I0124 17:45:33.506065 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0124 17:45:33.506137 128080 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0124 17:45:33.508774 128080 command_runner.go:130] ! stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0124 17:45:33.508914 128080 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0124 17:45:33.508943 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (441986565 bytes)
I0124 17:45:34.228914 128080 docker.go:594] Took 0.722810 seconds to copy over tarball
I0124 17:45:34.228974 128080 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0124 17:45:36.426644 128080 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197646751s)
I0124 17:45:36.426668 128080 ssh_runner.go:146] rm: /preloaded.tar.lz4
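The preload step is: scp the lz4 tarball to /preloaded.tar.lz4, unpack it into /var (which populates /var/lib/docker), and delete it. A sketch of the extraction shell-out (requires tar and lz4 on the target, exactly as in the logged command):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` from the log.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extract preload: %v", err)
	}
	if err := os.Remove("/preloaded.tar.lz4"); err != nil {
		log.Printf("rm preload: %v", err) // cleanup failure is non-fatal
	}
}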
I0124 17:45:36.487459 128080 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0124 17:45:36.494297 128080 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.4":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d","registry.k8s.io/coredns/coredns@sha256:b82e294de6be763f73ae71266c8f5466e7e03c69f3a1de96efd570284d35bb18":"sha256:a81c2ec4e946de3f8baa403be700db69454b42b50ab2cd17731f80065c62d42d"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:v3.3.8-0-gke.1":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b","registry.k8s.io/etcd@sha256:786ab1b91730b4171748511553abebaf73df1b5e8f1283d4bb5561728ae47fd5":"sha256:2a575b86cb35225ed31fa5ee639ff14359a79b40982ce2bc6a5a36f642f9e97b"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:test2":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","registry.k8s.io/pause@sha256:0c17b6b35fafb2de159db2af2c0e40a4c1aa1a210bac1b65fbf807f105899146":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06"}}}
I0124 17:45:36.494473 128080 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2637 bytes)
I0124 17:45:36.507161 128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:45:36.582112 128080 ssh_runner.go:195] Run: sudo systemctl restart docker
I0124 17:45:39.494099 128080 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.911944421s)
I0124 17:45:39.494233 128080 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0124 17:45:39.518359 128080 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
I0124 17:45:39.518379 128080 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
I0124 17:45:39.518384 128080 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
I0124 17:45:39.518389 128080 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
I0124 17:45:39.518394 128080 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.4
I0124 17:45:39.518399 128080 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:45:39.518404 128080 command_runner.go:130] > registry.k8s.io/etcd:v3.3.8-0-gke.1
I0124 17:45:39.518408 128080 command_runner.go:130] > registry.k8s.io/pause:test2
I0124 17:45:39.518440 128080 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/coredns/coredns:v1.9.4
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/etcd:v3.3.8-0-gke.1
registry.k8s.io/pause:test2
-- /stdout --
I0124 17:45:39.518450 128080 docker.go:636] registry.k8s.io/pause:3.9 wasn't preloaded
I0124 17:45:39.518461 128080 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.26.1 registry.k8s.io/kube-controller-manager:v1.26.1 registry.k8s.io/kube-scheduler:v1.26.1 registry.k8s.io/kube-proxy:v1.26.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.6-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
I0124 17:45:39.520083 128080 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.26.1
I0124 17:45:39.520153 128080 image.go:134] retrieving image: registry.k8s.io/pause:3.9
I0124 17:45:39.520300 128080 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:45:39.520380 128080 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.26.1
I0124 17:45:39.520398 128080 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.26.1
I0124 17:45:39.520428 128080 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.6-0
I0124 17:45:39.520383 128080 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:39.520477 128080 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.26.1
I0124 17:45:39.521147 128080 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.26.1: Error: No such image: registry.k8s.io/kube-controller-manager:v1.26.1
I0124 17:45:39.521176 128080 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error: No such image: registry.k8s.io/pause:3.9
I0124 17:45:39.521211 128080 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.26.1: Error: No such image: registry.k8s.io/kube-apiserver:v1.26.1
I0124 17:45:39.521227 128080 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.26.1: Error: No such image: registry.k8s.io/kube-scheduler:v1.26.1
I0124 17:45:39.521280 128080 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:45:39.521340 128080 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.26.1: Error: No such image: registry.k8s.io/kube-proxy:v1.26.1
I0124 17:45:39.521350 128080 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:39.521975 128080 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.6-0: Error: No such image: registry.k8s.io/etcd:3.5.6-0
I0124 17:45:39.670545 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.6-0
I0124 17:45:39.670545 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.26.1
I0124 17:45:39.675288 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:39.678647 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.26.1
I0124 17:45:39.680658 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.26.1
I0124 17:45:39.685067 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9
I0124 17:45:39.697698 128080 command_runner.go:130] > sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
I0124 17:45:39.702908 128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/etcd:3.5.6-0
I0124 17:45:39.703037 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.26.1
I0124 17:45:39.705410 128080 cache_images.go:116] "registry.k8s.io/etcd:3.5.6-0" needs transfer: "registry.k8s.io/etcd:3.5.6-0" does not exist at hash "fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7" in container runtime
I0124 17:45:39.705464 128080 docker.go:306] Removing image: registry.k8s.io/etcd:3.5.6-0
I0124 17:45:39.705501 128080 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.6-0
I0124 17:45:39.709780 128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:39.709836 128080 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
I0124 17:45:39.709877 128080 docker.go:306] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:39.709921 128080 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:39.738381 128080 command_runner.go:130] > sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
I0124 17:45:39.744815 128080 command_runner.go:130] > sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
I0124 17:45:39.746542 128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/pause:3.9
I0124 17:45:39.746614 128080 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
I0124 17:45:39.746651 128080 docker.go:306] Removing image: registry.k8s.io/pause:3.9
I0124 17:45:39.746694 128080 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.9
I0124 17:45:39.755440 128080 command_runner.go:130] > sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
I0124 17:45:39.756584 128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/etcd:3.5.6-0
I0124 17:45:39.756646 128080 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0
I0124 17:45:39.756670 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 -> /var/lib/minikube/images/etcd_3.5.6-0
I0124 17:45:39.756747 128080 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.6-0
I0124 17:45:39.760357 128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:39.760403 128080 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
I0124 17:45:39.760428 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 -> /var/lib/minikube/images/coredns_v1.9.3
I0124 17:45:39.760485 128080 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
I0124 17:45:39.770551 128080 command_runner.go:130] ! Error: No such image: registry.k8s.io/pause:3.9
I0124 17:45:39.770610 128080 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
I0124 17:45:39.770635 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 -> /var/lib/minikube/images/pause_3.9
I0124 17:45:39.770636 128080 command_runner.go:130] ! stat: cannot stat '/var/lib/minikube/images/etcd_3.5.6-0': No such file or directory
I0124 17:45:39.770668 128080 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.6-0: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.5.6-0': No such file or directory
I0124 17:45:39.770691 128080 command_runner.go:130] ! stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
I0124 17:45:39.770699 128080 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.9
I0124 17:45:39.770695 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 --> /var/lib/minikube/images/etcd_3.5.6-0 (102545408 bytes)
I0124 17:45:39.770731 128080 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
I0124 17:45:39.770747 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
I0124 17:45:39.775700 128080 command_runner.go:130] ! stat: cannot stat '/var/lib/minikube/images/pause_3.9': No such file or directory
I0124 17:45:39.776088 128080 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.9': No such file or directory
I0124 17:45:39.776112 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (322048 bytes)
I0124 17:45:39.852449 128080 docker.go:273] Loading image: /var/lib/minikube/images/pause_3.9
I0124 17:45:39.852485 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.9 | docker load"
I0124 17:45:40.080634 128080 command_runner.go:130] > Loaded image: registry.k8s.io/pause:3.9
I0124 17:45:40.081345 128080 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 from cache
I0124 17:45:40.081383 128080 docker.go:273] Loading image: /var/lib/minikube/images/coredns_v1.9.3
I0124 17:45:40.081405 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.9.3 | docker load"
I0124 17:45:40.205386 128080 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:45:40.708141 128080 command_runner.go:130] > Loaded image: registry.k8s.io/coredns/coredns:v1.9.3
I0124 17:45:40.711557 128080 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
I0124 17:45:40.711591 128080 docker.go:273] Loading image: /var/lib/minikube/images/etcd_3.5.6-0
I0124 17:45:40.711605 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load"
I0124 17:45:40.711601 128080 command_runner.go:130] > sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
I0124 17:45:43.685698 128080 command_runner.go:130] > Loaded image: registry.k8s.io/etcd:3.5.6-0
I0124 17:45:43.699960 128080 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load": (2.988329653s)
I0124 17:45:43.699988 128080 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/15565-3637/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 from cache
I0124 17:45:43.700012 128080 cache_images.go:123] Successfully loaded all cached images
I0124 17:45:43.700016 128080 cache_images.go:92] LoadImages completed in 4.181537607s
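Each cached image above is loaded with the same shape of command: stream the tarball into `docker load`. A sketch of that step, run locally rather than over SSH as in the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

// dockerLoad pipes a cached image tarball into `docker load`, the same
// shape as the `sudo cat <tar> | docker load` commands above.
func dockerLoad(tarPath string) error {
	f, err := os.Open(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/pause_3.9"); err != nil {
		log.Fatal(err)
	}
}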
I0124 17:45:43.700071 128080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0124 17:45:43.766188 128080 command_runner.go:130] > cgroupfs
I0124 17:45:43.766245 128080 cni.go:84] Creating CNI manager for ""
I0124 17:45:43.766255 128080 cni.go:136] 1 nodes found, recommending kindnet
I0124 17:45:43.766264 128080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0124 17:45:43.766287 128080 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0124 17:45:43.766439 128080 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "multinode-585561"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
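Configs like the kubeadm YAML above are typically rendered from a template over the options struct logged at kubeadm.go:172. A hedged illustration with text/template; the fragment and field names are made up for the example, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Illustrative fragment only; it renders the first few lines of the
// InitConfiguration shown above from an options struct.
const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		APIServerPort    int
	}{"192.168.58.2", 8443})
}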
I0124 17:45:43.766514 128080 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0124 17:45:43.766560 128080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0124 17:45:43.773089 128080 command_runner.go:130] > kubeadm
I0124 17:45:43.773108 128080 command_runner.go:130] > kubectl
I0124 17:45:43.773113 128080 command_runner.go:130] > kubelet
I0124 17:45:43.773616 128080 binaries.go:44] Found k8s binaries, skipping transfer
I0124 17:45:43.773667 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0124 17:45:43.780165 128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
I0124 17:45:43.792314 128080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0124 17:45:43.804533 128080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
I0124 17:45:43.817082 128080 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0124 17:45:43.819995 128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
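
The /etc/hosts rewrite above follows the usual idempotent pattern: strip any stale control-plane.minikube.internal line, append the current mapping, and copy the staged file back over /etc/hosts. A rough Go equivalent of the same steps (paths hard-coded for illustration; the real run stages through /tmp and sudo cp, as shown):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.58.2\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the same name (the grep -v step).
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// The shell version stages through /tmp and copies back with sudo cp;
	// writing directly keeps the sketch short (still requires root).
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
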
I0124 17:45:43.829201 128080 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.2
I0124 17:45:43.829240 128080 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:45:43.829371 128080 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
I0124 17:45:43.829405 128080 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
I0124 17:45:43.829442 128080 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key
I0124 17:45:43.829455 128080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt with IP's: []
I0124 17:45:44.010311 128080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt ...
I0124 17:45:44.010345 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt: {Name:mk2d37d083f05e3e37e8965ee60d661367fc2e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:45:44.010529 128080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key ...
I0124 17:45:44.010542 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key: {Name:mka1ee5a4e8e936aba7297183fb01c3e0d44b829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:45:44.010612 128080 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041
I0124 17:45:44.010628 128080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0124 17:45:44.233695 128080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041 ...
I0124 17:45:44.233735 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041: {Name:mke4bb023677d665ceec185667069cfc8848a1d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:45:44.233896 128080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041 ...
I0124 17:45:44.233907 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041: {Name:mkfcc331e089aeb8ebcd481d3bfe9c073ca672c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:45:44.233990 128080 certs.go:333] copying /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt
I0124 17:45:44.234067 128080 certs.go:337] copying /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key
I0124 17:45:44.234118 128080 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key
I0124 17:45:44.234132 128080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt with IP's: []
I0124 17:45:44.414136 128080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt ...
I0124 17:45:44.414168 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt: {Name:mk42f04a02210acb9676eb8aa41efbf8f98dd76c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:45:44.414340 128080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key ...
I0124 17:45:44.414351 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key: {Name:mkc04d12dc5e84768be7380c00c4420689c3c21f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
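
Each of the signed-cert steps above is plain x509 issuance against the shared minikubeCA. A compressed, self-contained sketch of the idea with Go's crypto/x509 (a freshly self-signed CA stands in for ca.crt/ca.key; names, serials, and lifetimes are illustrative, not minikube's actual crypto.go values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA key pair (the real run reuses .minikube/ca.key / ca.crt).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Client certificate signed by that CA, as for profiles/<name>/client.crt.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
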
I0124 17:45:44.414421 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0124 17:45:44.414435 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0124 17:45:44.414443 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0124 17:45:44.414454 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0124 17:45:44.414463 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0124 17:45:44.414474 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0124 17:45:44.414482 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0124 17:45:44.414495 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0124 17:45:44.414542 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
W0124 17:45:44.414574 128080 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
I0124 17:45:44.414582 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
I0124 17:45:44.414602 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
I0124 17:45:44.414623 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
I0124 17:45:44.414651 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
I0124 17:45:44.414692 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:45:44.414719 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /usr/share/ca-certificates/101262.pem
I0124 17:45:44.414737 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0124 17:45:44.414747 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem -> /usr/share/ca-certificates/10126.pem
I0124 17:45:44.415262 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0124 17:45:44.433928 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0124 17:45:44.451134 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0124 17:45:44.468595 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0124 17:45:44.485892 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0124 17:45:44.502849 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0124 17:45:44.519562 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0124 17:45:44.536175 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0124 17:45:44.553058 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
I0124 17:45:44.570471 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0124 17:45:44.587228 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
I0124 17:45:44.604268 128080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0124 17:45:44.616325 128080 ssh_runner.go:195] Run: openssl version
I0124 17:45:44.620913 128080 command_runner.go:130] > OpenSSL 1.1.1f 31 Mar 2020
I0124 17:45:44.621093 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0124 17:45:44.628037 128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0124 17:45:44.631038 128080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
I0124 17:45:44.631068 128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
I0124 17:45:44.631099 128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0124 17:45:44.635478 128080 command_runner.go:130] > b5213941
I0124 17:45:44.635585 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0124 17:45:44.642526 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
I0124 17:45:44.649442 128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
I0124 17:45:44.652244 128080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
I0124 17:45:44.652293 128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
I0124 17:45:44.652338 128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
I0124 17:45:44.656688 128080 command_runner.go:130] > 51391683
I0124 17:45:44.656859 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
I0124 17:45:44.663766 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
I0124 17:45:44.670660 128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
I0124 17:45:44.673361 128080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
I0124 17:45:44.673430 128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
I0124 17:45:44.673469 128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
I0124 17:45:44.677678 128080 command_runner.go:130] > 3ec20f2e
I0124 17:45:44.677802 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
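
The three openssl x509 -hash calls above compute OpenSSL's subject-name hash for each CA file, and the <hash>.0 symlinks created in /etc/ssl/certs are what let OpenSSL-based clients find a trusted CA by hashed lookup. The same two steps for one of the files, sketched in Go by shelling out to openssl (paths are the ones used above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of `ln -fs`: drop any stale link, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}
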
I0124 17:45:44.684540 128080 kubeadm.go:401] StartCluster: {Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 17:45:44.684650 128080 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0124 17:45:44.705063 128080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0124 17:45:44.711927 128080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
I0124 17:45:44.711955 128080 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
I0124 17:45:44.711964 128080 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I0124 17:45:44.712018 128080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0124 17:45:44.718747 128080 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0124 17:45:44.718796 128080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0124 17:45:44.725397 128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0124 17:45:44.725416 128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0124 17:45:44.725423 128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0124 17:45:44.725431 128080 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0124 17:45:44.725456 128080 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0124 17:45:44.725483 128080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0124 17:45:44.769803 128080 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
I0124 17:45:44.769829 128080 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
I0124 17:45:44.769903 128080 kubeadm.go:322] [preflight] Running pre-flight checks
I0124 17:45:44.769917 128080 command_runner.go:130] > [preflight] Running pre-flight checks
I0124 17:45:44.802975 128080 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0124 17:45:44.803004 128080 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
I0124 17:45:44.803098 128080 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1027-gcp
I0124 17:45:44.803126 128080 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
I0124 17:45:44.803181 128080 kubeadm.go:322] OS: Linux
I0124 17:45:44.803192 128080 command_runner.go:130] > OS: Linux
I0124 17:45:44.803249 128080 kubeadm.go:322] CGROUPS_CPU: enabled
I0124 17:45:44.803260 128080 command_runner.go:130] > CGROUPS_CPU: enabled
I0124 17:45:44.803320 128080 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I0124 17:45:44.803340 128080 command_runner.go:130] > CGROUPS_CPUACCT: enabled
I0124 17:45:44.803422 128080 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0124 17:45:44.803434 128080 command_runner.go:130] > CGROUPS_CPUSET: enabled
I0124 17:45:44.803508 128080 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0124 17:45:44.803523 128080 command_runner.go:130] > CGROUPS_DEVICES: enabled
I0124 17:45:44.803591 128080 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0124 17:45:44.803602 128080 command_runner.go:130] > CGROUPS_FREEZER: enabled
I0124 17:45:44.803654 128080 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0124 17:45:44.803663 128080 command_runner.go:130] > CGROUPS_MEMORY: enabled
I0124 17:45:44.803719 128080 kubeadm.go:322] CGROUPS_PIDS: enabled
I0124 17:45:44.803728 128080 command_runner.go:130] > CGROUPS_PIDS: enabled
I0124 17:45:44.803776 128080 kubeadm.go:322] CGROUPS_HUGETLB: enabled
I0124 17:45:44.803786 128080 command_runner.go:130] > CGROUPS_HUGETLB: enabled
I0124 17:45:44.803833 128080 kubeadm.go:322] CGROUPS_BLKIO: enabled
I0124 17:45:44.803842 128080 command_runner.go:130] > CGROUPS_BLKIO: enabled
I0124 17:45:44.867714 128080 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0124 17:45:44.867737 128080 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
I0124 17:45:44.867845 128080 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0124 17:45:44.867869 128080 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I0124 17:45:44.867986 128080 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0124 17:45:44.867997 128080 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0124 17:45:44.993524 128080 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0124 17:45:44.993567 128080 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0124 17:45:44.997564 128080 out.go:204] - Generating certificates and keys ...
I0124 17:45:44.997635 128080 command_runner.go:130] > [certs] Using existing ca certificate authority
I0124 17:45:44.997676 128080 kubeadm.go:322] [certs] Using existing ca certificate authority
I0124 17:45:44.997786 128080 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0124 17:45:44.997807 128080 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0124 17:45:45.187295 128080 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0124 17:45:45.187328 128080 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
I0124 17:45:45.283882 128080 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0124 17:45:45.283911 128080 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
I0124 17:45:45.391350 128080 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0124 17:45:45.391374 128080 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
I0124 17:45:45.606365 128080 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0124 17:45:45.606387 128080 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
I0124 17:45:45.702268 128080 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0124 17:45:45.702297 128080 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
I0124 17:45:45.702438 128080 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
I0124 17:45:45.702456 128080 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
I0124 17:45:45.882805 128080 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0124 17:45:45.882832 128080 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
I0124 17:45:45.882989 128080 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
I0124 17:45:45.883005 128080 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-585561] and IPs [192.168.58.2 127.0.0.1 ::1]
I0124 17:45:46.093046 128080 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0124 17:45:46.093078 128080 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
I0124 17:45:46.159501 128080 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0124 17:45:46.159524 128080 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
I0124 17:45:46.301027 128080 kubeadm.go:322] [certs] Generating "sa" key and public key
I0124 17:45:46.301055 128080 command_runner.go:130] > [certs] Generating "sa" key and public key
I0124 17:45:46.301189 128080 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0124 17:45:46.301210 128080 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0124 17:45:46.495395 128080 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0124 17:45:46.495441 128080 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0124 17:45:46.652582 128080 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0124 17:45:46.652632 128080 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0124 17:45:46.946653 128080 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0124 17:45:46.946697 128080 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0124 17:45:47.133062 128080 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0124 17:45:47.133093 128080 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0124 17:45:47.145766 128080 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0124 17:45:47.145792 128080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0124 17:45:47.146624 128080 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0124 17:45:47.146649 128080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0124 17:45:47.146695 128080 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0124 17:45:47.146723 128080 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0124 17:45:47.229918 128080 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0124 17:45:47.229945 128080 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0124 17:45:47.233377 128080 out.go:204] - Booting up control plane ...
I0124 17:45:47.233488 128080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0124 17:45:47.233513 128080 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0124 17:45:47.233641 128080 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0124 17:45:47.233657 128080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0124 17:45:47.234372 128080 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0124 17:45:47.234389 128080 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0124 17:45:47.235031 128080 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0124 17:45:47.235048 128080 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0124 17:45:47.236750 128080 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0124 17:45:47.236771 128080 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0124 17:45:56.738655 128080 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.501901 seconds
I0124 17:45:56.738717 128080 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.501901 seconds
I0124 17:45:56.738924 128080 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0124 17:45:56.738947 128080 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0124 17:45:56.751647 128080 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0124 17:45:56.751682 128080 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0124 17:45:57.269253 128080 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0124 17:45:57.269281 128080 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
I0124 17:45:57.269510 128080 kubeadm.go:322] [mark-control-plane] Marking the node multinode-585561 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0124 17:45:57.269568 128080 command_runner.go:130] > [mark-control-plane] Marking the node multinode-585561 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0124 17:45:57.778295 128080 kubeadm.go:322] [bootstrap-token] Using token: klbn66.lbpub6z14ok3qnmd
I0124 17:45:57.780154 128080 out.go:204] - Configuring RBAC rules ...
I0124 17:45:57.778369 128080 command_runner.go:130] > [bootstrap-token] Using token: klbn66.lbpub6z14ok3qnmd
I0124 17:45:57.780311 128080 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0124 17:45:57.780333 128080 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0124 17:45:57.783421 128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0124 17:45:57.783438 128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0124 17:45:57.789469 128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0124 17:45:57.789493 128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0124 17:45:57.791912 128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0124 17:45:57.791938 128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0124 17:45:57.795719 128080 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0124 17:45:57.795747 128080 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0124 17:45:57.798166 128080 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0124 17:45:57.798189 128080 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0124 17:45:57.807852 128080 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0124 17:45:57.807927 128080 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0124 17:45:58.002344 128080 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0124 17:45:58.002373 128080 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0124 17:45:58.187864 128080 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0124 17:45:58.187890 128080 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0124 17:45:58.189197 128080 kubeadm.go:322]
I0124 17:45:58.189319 128080 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0124 17:45:58.189336 128080 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
I0124 17:45:58.189348 128080 kubeadm.go:322]
I0124 17:45:58.189467 128080 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0124 17:45:58.189495 128080 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
I0124 17:45:58.189509 128080 kubeadm.go:322]
I0124 17:45:58.189551 128080 kubeadm.go:322] mkdir -p $HOME/.kube
I0124 17:45:58.189561 128080 command_runner.go:130] > mkdir -p $HOME/.kube
I0124 17:45:58.189635 128080 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0124 17:45:58.189646 128080 command_runner.go:130] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0124 17:45:58.189731 128080 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0124 17:45:58.189747 128080 command_runner.go:130] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0124 17:45:58.189753 128080 kubeadm.go:322]
I0124 17:45:58.189836 128080 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0124 17:45:58.189847 128080 command_runner.go:130] > Alternatively, if you are the root user, you can run:
I0124 17:45:58.189852 128080 kubeadm.go:322]
I0124 17:45:58.189916 128080 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0124 17:45:58.189926 128080 command_runner.go:130] > export KUBECONFIG=/etc/kubernetes/admin.conf
I0124 17:45:58.189930 128080 kubeadm.go:322]
I0124 17:45:58.190004 128080 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0124 17:45:58.190015 128080 command_runner.go:130] > You should now deploy a pod network to the cluster.
I0124 17:45:58.190118 128080 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0124 17:45:58.190135 128080 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0124 17:45:58.190221 128080 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0124 17:45:58.190228 128080 command_runner.go:130] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0124 17:45:58.190233 128080 kubeadm.go:322]
I0124 17:45:58.190361 128080 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0124 17:45:58.190374 128080 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
I0124 17:45:58.190484 128080 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0124 17:45:58.190496 128080 command_runner.go:130] > and service account keys on each node and then running the following as root:
I0124 17:45:58.190501 128080 kubeadm.go:322]
I0124 17:45:58.190602 128080 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
I0124 17:45:58.190613 128080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
I0124 17:45:58.190738 128080 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 \
I0124 17:45:58.190750 128080 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 \
I0124 17:45:58.190776 128080 kubeadm.go:322] --control-plane
I0124 17:45:58.190782 128080 command_runner.go:130] > --control-plane
I0124 17:45:58.190793 128080 kubeadm.go:322]
I0124 17:45:58.190902 128080 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0124 17:45:58.190912 128080 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
I0124 17:45:58.190918 128080 kubeadm.go:322]
I0124 17:45:58.191017 128080 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
I0124 17:45:58.191028 128080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token klbn66.lbpub6z14ok3qnmd \
I0124 17:45:58.191152 128080 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46
I0124 17:45:58.191161 128080 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46
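
The --discovery-token-ca-cert-hash printed with both join commands is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA before trusting anything served over the bootstrap token. It can be recomputed from ca.crt, for example (reading the path used in this run):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
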
I0124 17:45:58.237280 128080 kubeadm.go:322] W0124 17:45:44.762284 1917 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0124 17:45:58.237309 128080 command_runner.go:130] ! W0124 17:45:44.762284 1917 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0124 17:45:58.237600 128080 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
I0124 17:45:58.237636 128080 command_runner.go:130] ! [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
I0124 17:45:58.237795 128080 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0124 17:45:58.237810 128080 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0124 17:45:58.237829 128080 cni.go:84] Creating CNI manager for ""
I0124 17:45:58.237852 128080 cni.go:136] 1 nodes found, recommending kindnet
I0124 17:45:58.239768 128080 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0124 17:45:58.241228 128080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0124 17:45:58.245464 128080 command_runner.go:130] > File: /opt/cni/bin/portmap
I0124 17:45:58.245490 128080 command_runner.go:130] > Size: 2828728 Blocks: 5536 IO Block: 4096 regular file
I0124 17:45:58.245501 128080 command_runner.go:130] > Device: 34h/52d Inode: 535835 Links: 1
I0124 17:45:58.245512 128080 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0124 17:45:58.245526 128080 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
I0124 17:45:58.245535 128080 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
I0124 17:45:58.245542 128080 command_runner.go:130] > Change: 2023-01-24 17:29:00.473607792 +0000
I0124 17:45:58.245548 128080 command_runner.go:130] > Birth: -
I0124 17:45:58.245604 128080 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0124 17:45:58.245613 128080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0124 17:45:58.260810 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0124 17:45:58.922634 128080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
I0124 17:45:58.929361 128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0124 17:45:58.935073 128080 command_runner.go:130] > serviceaccount/kindnet created
I0124 17:45:58.943119 128080 command_runner.go:130] > daemonset.apps/kindnet created
I0124 17:45:58.946499 128080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0124 17:45:58.946574 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:45:58.946595 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=6b2c057f52b907b52814c670e5ac26b018123ade minikube.k8s.io/name=multinode-585561 minikube.k8s.io/updated_at=2023_01_24T17_45_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:45:59.039306 128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
I0124 17:45:59.043121 128080 command_runner.go:130] > -16
I0124 17:45:59.043156 128080 ops.go:34] apiserver oom_adj: -16
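
The -16 read back from /proc/<pid>/oom_adj shows the API server carries OOM protection (a negative value steers the kernel's OOM killer away from the process). The check itself is just a procfs read; a small stand-alone version of what the log does, assuming pgrep is available:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep kube-apiserver, then read that PID's oom_adj, as the log does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0]
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(val))) // expect something like -16
}
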
I0124 17:45:59.043251 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:45:59.054574 128080 command_runner.go:130] > node/multinode-585561 labeled
I0124 17:45:59.106915 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:45:59.607690 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:45:59.673704 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:00.107350 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:00.171767 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:00.607314 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:00.671479 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:01.108138 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:01.172113 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:01.607749 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:01.668888 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:02.107875 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:02.168102 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:02.607325 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:02.668589 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:03.108046 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:03.168986 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:03.608067 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:03.670289 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:04.107967 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:04.168901 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:04.607630 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:04.671033 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:05.107652 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:05.168730 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:05.607895 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:05.670809 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:06.107366 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:06.174207 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:06.607862 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:06.670424 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:07.108081 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:07.169397 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:07.607437 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:07.668571 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:08.107958 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:08.172977 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:08.607445 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:08.745915 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:09.107627 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:09.173030 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:09.607625 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:09.671485 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:10.108053 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:10.175202 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:10.607931 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:10.677701 128080 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0124 17:46:11.107282 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0124 17:46:11.237804 128080 command_runner.go:130] > NAME SECRETS AGE
I0124 17:46:11.237827 128080 command_runner.go:130] > default 0 1s
I0124 17:46:11.241506 128080 kubeadm.go:1073] duration metric: took 12.294989418s to wait for elevateKubeSystemPrivileges.
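
The run of NotFound errors above is expected: the "default" service account only exists once the controller-manager's service-account controller has synced, so minikube polls `kubectl get sa default` on a roughly 500ms cadence until it appears (about 12.3s in this run). The same wait, sketched as a Go loop (the timeout and plain `kubectl` invocation are illustrative; the run above uses the versioned binary under /var/lib/minikube/binaries):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	panic("timed out waiting for the default service account")
}
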
I0124 17:46:11.241541 128080 kubeadm.go:403] StartCluster complete in 26.557007789s
I0124 17:46:11.241565 128080 settings.go:142] acquiring lock: {Name:mkad36df43ddb11f4b3b585fb658d2ead0b2161f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:46:11.241636 128080 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15565-3637/kubeconfig
I0124 17:46:11.242500 128080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15565-3637/kubeconfig: {Name:mk90224603185dd0b148bed729b1c974f808bca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:46:11.243056 128080 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3637/kubeconfig
I0124 17:46:11.243366 128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0124 17:46:11.244092 128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0124 17:46:11.244103 128080 round_trippers.go:469] Request Headers:
I0124 17:46:11.244113 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:11.244122 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:11.244357 128080 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0124 17:46:11.244379 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0124 17:46:11.244418 128080 addons.go:65] Setting storage-provisioner=true in profile "multinode-585561"
I0124 17:46:11.244434 128080 addons.go:227] Setting addon storage-provisioner=true in "multinode-585561"
W0124 17:46:11.244441 128080 addons.go:236] addon storage-provisioner should already be in state true
I0124 17:46:11.244469 128080 cert_rotation.go:137] Starting client certificate rotation controller
I0124 17:46:11.244493 128080 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:46:11.244670 128080 addons.go:65] Setting default-storageclass=true in profile "multinode-585561"
I0124 17:46:11.244686 128080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-585561"
I0124 17:46:11.244761 128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:46:11.244957 128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:46:11.245001 128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:46:11.256720 128080 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
I0124 17:46:11.256748 128080 round_trippers.go:577] Response Headers:
I0124 17:46:11.256759 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:11.256768 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:11.256777 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:11.256785 128080 round_trippers.go:580] Content-Length: 291
I0124 17:46:11.256794 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:11 GMT
I0124 17:46:11.256803 128080 round_trippers.go:580] Audit-Id: 5cdaa95e-5c9a-406c-8dda-598965a63aeb
I0124 17:46:11.256810 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:11.256840 128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"350","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0124 17:46:11.257342 128080 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"350","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0124 17:46:11.257395 128080 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0124 17:46:11.257401 128080 round_trippers.go:469] Request Headers:
I0124 17:46:11.257412 128080 round_trippers.go:473] Content-Type: application/json
I0124 17:46:11.257422 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:11.257429 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:11.265014 128080 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0124 17:46:11.265043 128080 round_trippers.go:577] Response Headers:
I0124 17:46:11.265054 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:11 GMT
I0124 17:46:11.265070 128080 round_trippers.go:580] Audit-Id: f524da67-f4f6-4219-864c-9bef6dbb6092
I0124 17:46:11.265082 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:11.265096 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:11.265108 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:11.265121 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:11.265134 128080 round_trippers.go:580] Content-Length: 291
I0124 17:46:11.265164 128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"353","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
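
The GET/PUT pair above is the standard Scale-subresource round trip: read the current autoscaling/v1 Scale for the coredns Deployment, lower spec.replicas from 2 to 1, and PUT the object back. A minimal client-go sketch of the same dance (not minikube's actual code; the kubeconfig path is taken from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15565-3637/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        // GET .../deployments/coredns/scale
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // PUT the same object back with spec.replicas lowered to 1,
        // mirroring the request body logged above.
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns scaled to 1 replica")
    }

The resourceVersion carried in the PUT body is what makes this safe against concurrent writers: the apiserver rejects the update if the Scale object changed between the GET and the PUT.
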
I0124 17:46:11.276945 128080 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0124 17:46:11.279090 128080 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0124 17:46:11.279120 128080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0124 17:46:11.279192 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:46:11.284897 128080 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3637/kubeconfig
I0124 17:46:11.285238 128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0124 17:46:11.285653 128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
I0124 17:46:11.285665 128080 round_trippers.go:469] Request Headers:
I0124 17:46:11.285676 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:11.285685 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:11.307656 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
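
The sshutil line above shows the connection parameters used to reach the node container: the host's published port for 22/tcp (32852), user "docker", and the machine's private key. A sketch of building an equivalent client with golang.org/x/crypto/ssh (not minikube's sshutil itself; skipping host-key verification is only tolerable for throwaway test nodes):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; verify host keys in real use
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32852", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected:", client.RemoteAddr())
    }
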
I0124 17:46:11.338184 128080 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
I0124 17:46:11.338214 128080 round_trippers.go:577] Response Headers:
I0124 17:46:11.338226 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:11 GMT
I0124 17:46:11.338234 128080 round_trippers.go:580] Audit-Id: bf437a79-0171-4414-adb0-e7ee7e8c06e6
I0124 17:46:11.338241 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:11.338249 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:11.338260 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:11.338268 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:11.338276 128080 round_trippers.go:580] Content-Length: 109
I0124 17:46:11.338306 128080 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"357"},"items":[]}
I0124 17:46:11.338660 128080 addons.go:227] Setting addon default-storageclass=true in "multinode-585561"
W0124 17:46:11.338690 128080 addons.go:236] addon default-storageclass should already be in state true
I0124 17:46:11.338721 128080 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:46:11.339224 128080 cli_runner.go:164] Run: docker container inspect multinode-585561 --format={{.State.Status}}
I0124 17:46:11.369188 128080 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0124 17:46:11.369210 128080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0124 17:46:11.369276 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:46:11.398805 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:46:11.445597 128080 command_runner.go:130] > apiVersion: v1
I0124 17:46:11.445622 128080 command_runner.go:130] > data:
I0124 17:46:11.445629 128080 command_runner.go:130] >   Corefile: |
I0124 17:46:11.445635 128080 command_runner.go:130] >     .:53 {
I0124 17:46:11.445641 128080 command_runner.go:130] >         errors
I0124 17:46:11.445649 128080 command_runner.go:130] >         health {
I0124 17:46:11.445655 128080 command_runner.go:130] >            lameduck 5s
I0124 17:46:11.445662 128080 command_runner.go:130] >         }
I0124 17:46:11.445668 128080 command_runner.go:130] >         ready
I0124 17:46:11.445681 128080 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
I0124 17:46:11.445691 128080 command_runner.go:130] >            pods insecure
I0124 17:46:11.445700 128080 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
I0124 17:46:11.445711 128080 command_runner.go:130] >            ttl 30
I0124 17:46:11.445719 128080 command_runner.go:130] >         }
I0124 17:46:11.445726 128080 command_runner.go:130] >         prometheus :9153
I0124 17:46:11.445733 128080 command_runner.go:130] >         forward . /etc/resolv.conf {
I0124 17:46:11.445745 128080 command_runner.go:130] >            max_concurrent 1000
I0124 17:46:11.445754 128080 command_runner.go:130] >         }
I0124 17:46:11.445761 128080 command_runner.go:130] >         cache 30
I0124 17:46:11.445776 128080 command_runner.go:130] >         loop
I0124 17:46:11.445786 128080 command_runner.go:130] >         reload
I0124 17:46:11.445792 128080 command_runner.go:130] >         loadbalance
I0124 17:46:11.445800 128080 command_runner.go:130] >     }
I0124 17:46:11.445809 128080 command_runner.go:130] > kind: ConfigMap
I0124 17:46:11.445819 128080 command_runner.go:130] > metadata:
I0124 17:46:11.445830 128080 command_runner.go:130] >   creationTimestamp: "2023-01-24T17:45:57Z"
I0124 17:46:11.445839 128080 command_runner.go:130] >   name: coredns
I0124 17:46:11.445852 128080 command_runner.go:130] >   namespace: kube-system
I0124 17:46:11.445862 128080 command_runner.go:130] >   resourceVersion: "234"
I0124 17:46:11.445870 128080 command_runner.go:130] >   uid: 6a251b5a-c4e7-4c33-ac27-89bc13f50707
I0124 17:46:11.449387 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
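
The bash pipeline above rewrites the CoreDNS Corefile in place: sed inserts a hosts{} block (mapping host.minikube.internal to the host gateway IP) before the forward directive and a log directive before errors, then kubectl replace pushes the edited ConfigMap back. A rough Go equivalent of the insertion step (a sketch under the assumption the Corefile arrives as a plain string; injectHostRecord is an illustrative name):

    package sketch

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block for host.minikube.internal
    // immediately before the "forward . /etc/resolv.conf" directive, like
    // the sed "i" command in the logged pipeline.
    func injectHostRecord(corefile, hostIP string) string {
        hosts := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
        var b strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hosts)
            }
            b.WriteString(line)
            b.WriteString("\n")
        }
        return b.String()
    }
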
I0124 17:46:11.552819 128080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0124 17:46:11.555718 128080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
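
Both apply commands run remotely over the SSH connection established earlier. A sketch of executing one of them against an *ssh.Client (companion to the dial example above; applyAddon is an illustrative name):

    package sketch

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func applyAddon(client *ssh.Client, manifest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        // Command string mirrors the logged ssh_runner invocation.
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f " + manifest
        if out, err := sess.CombinedOutput(cmd); err != nil {
            return fmt.Errorf("kubectl apply failed: %s: %w", out, err)
        }
        return nil
    }
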
I0124 17:46:11.766086 128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0124 17:46:11.766110 128080 round_trippers.go:469] Request Headers:
I0124 17:46:11.766122 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:11.766131 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:11.839048 128080 round_trippers.go:574] Response Status: 200 OK in 72 milliseconds
I0124 17:46:11.839079 128080 round_trippers.go:577] Response Headers:
I0124 17:46:11.839091 128080 round_trippers.go:580] Content-Length: 291
I0124 17:46:11.839100 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:11 GMT
I0124 17:46:11.839110 128080 round_trippers.go:580] Audit-Id: 5654b5dd-e997-4a21-aa30-931932c7b55f
I0124 17:46:11.839119 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:11.839143 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:11.839152 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:11.839161 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:11.839203 128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"362","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0124 17:46:11.839316 128080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-585561" context rescaled to 1 replicas
I0124 17:46:11.839348 128080 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0124 17:46:11.841837 128080 out.go:177] * Verifying Kubernetes components...
I0124 17:46:11.843840 128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0124 17:46:12.053650 128080 command_runner.go:130] > configmap/coredns replaced
I0124 17:46:12.058269 128080 start.go:908] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
I0124 17:46:12.258166 128080 command_runner.go:130] > storageclass.storage.k8s.io/standard created
I0124 17:46:12.258295 128080 command_runner.go:130] > serviceaccount/storage-provisioner created
I0124 17:46:12.258317 128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
I0124 17:46:12.261200 128080 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0124 17:46:12.270920 128080 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0124 17:46:12.278019 128080 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
I0124 17:46:12.341770 128080 command_runner.go:130] > pod/storage-provisioner created
I0124 17:46:12.347563 128080 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3637/kubeconfig
I0124 17:46:12.349060 128080 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0124 17:46:12.350819 128080 addons.go:488] enableAddons completed in 1.106456266s
I0124 17:46:12.354715 128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0124 17:46:12.355351 128080 node_ready.go:35] waiting up to 6m0s for node "multinode-585561" to be "Ready" ...
I0124 17:46:12.355428 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:12.355439 128080 round_trippers.go:469] Request Headers:
I0124 17:46:12.355450 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:12.355463 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:12.357733 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:12.357777 128080 round_trippers.go:577] Response Headers:
I0124 17:46:12.357789 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:12.357803 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:12.357819 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:12.357831 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:12.357846 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:12 GMT
I0124 17:46:12.357859 128080 round_trippers.go:580] Audit-Id: da4b8883-0a53-42f5-bd59-9f5b92eb5e37
I0124 17:46:12.358027 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:12.358647 128080 node_ready.go:49] node "multinode-585561" has status "Ready":"True"
I0124 17:46:12.358664 128080 node_ready.go:38] duration metric: took 3.293273ms waiting for node "multinode-585561" to be "Ready" ...
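
The "Ready":"True" verdict here comes from the NodeReady condition in the Node's status. A sketch of the check node_ready.go is performing (nodeIsReady is an illustrative name; assumes a client-go clientset like the one dumped above):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the node's NodeReady condition is True.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
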
I0124 17:46:12.358674 128080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
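
The follow-up GET fetches every kube-system pod once; the waiter then tracks the ones carrying a system-critical label from the list above. A sketch of that filtering step (criticalPods and the label table are illustrative, not minikube's exact structures):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    var criticalLabels = map[string][]string{
        "k8s-app":   {"kube-dns", "kube-proxy"},
        "component": {"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"},
    }

    // criticalPods returns the kube-system pods that carry any of the
    // system-critical labels named in the log line above.
    func criticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
        list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var out []corev1.Pod
        for _, pod := range list.Items {
        match:
            for key, values := range criticalLabels {
                for _, v := range values {
                    if pod.Labels[key] == v {
                        out = append(out, pod)
                        break match
                    }
                }
            }
        }
        return out, nil
    }
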
I0124 17:46:12.358752 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0124 17:46:12.358769 128080 round_trippers.go:469] Request Headers:
I0124 17:46:12.358780 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:12.358790 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:12.362209 128080 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0124 17:46:12.362234 128080 round_trippers.go:577] Response Headers:
I0124 17:46:12.362242 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:12.362247 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:12.362253 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:12.362258 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:12.362266 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:12 GMT
I0124 17:46:12.362272 128080 round_trippers.go:580] Audit-Id: 75b8da56-ebb4-4006-b382-354f188a10a6
I0124 17:46:12.362816 128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"375"},"items":[{"metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 60456 chars]
I0124 17:46:12.366111 128080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-5748b" in "kube-system" namespace to be "Ready" ...
I0124 17:46:12.366188 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
I0124 17:46:12.366198 128080 round_trippers.go:469] Request Headers:
I0124 17:46:12.366206 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:12.366216 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:12.371417 128080 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0124 17:46:12.371439 128080 round_trippers.go:577] Response Headers:
I0124 17:46:12.371448 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:12.371456 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:12.371468 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:12 GMT
I0124 17:46:12.371477 128080 round_trippers.go:580] Audit-Id: 5a557803-3f00-4921-87e1-53bf1bbbb7b8
I0124 17:46:12.371486 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:12.371499 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:12.371939 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
I0124 17:46:12.372424 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:12.372437 128080 round_trippers.go:469] Request Headers:
I0124 17:46:12.372445 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:12.372451 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:12.374387 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:12.374410 128080 round_trippers.go:577] Response Headers:
I0124 17:46:12.374420 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:12.374430 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:12 GMT
I0124 17:46:12.374442 128080 round_trippers.go:580] Audit-Id: 68d67445-5ac8-4709-a6d2-1ada528dd390
I0124 17:46:12.374458 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:12.374467 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:12.374477 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:12.374582 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:12.875303 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
I0124 17:46:12.875329 128080 round_trippers.go:469] Request Headers:
I0124 17:46:12.875341 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:12.875351 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:12.878007 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:12.878035 128080 round_trippers.go:577] Response Headers:
I0124 17:46:12.878046 128080 round_trippers.go:580] Audit-Id: cc041326-b604-4316-9e60-4c454a29d35d
I0124 17:46:12.878055 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:12.878064 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:12.878074 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:12.878086 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:12.878095 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:12 GMT
I0124 17:46:12.878225 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
I0124 17:46:12.878682 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:12.878697 128080 round_trippers.go:469] Request Headers:
I0124 17:46:12.878707 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:12.878716 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:12.880865 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:12.880894 128080 round_trippers.go:577] Response Headers:
I0124 17:46:12.880903 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:12 GMT
I0124 17:46:12.880911 128080 round_trippers.go:580] Audit-Id: 5228e599-067b-4a85-af43-7a0fc70a71cf
I0124 17:46:12.880919 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:12.880929 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:12.880940 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:12.880951 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:12.881066 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:13.375648 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
I0124 17:46:13.375675 128080 round_trippers.go:469] Request Headers:
I0124 17:46:13.375688 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:13.375697 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:13.378104 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:13.378128 128080 round_trippers.go:577] Response Headers:
I0124 17:46:13.378138 128080 round_trippers.go:580] Audit-Id: 8ca965c5-21d1-42a6-a678-42536fcf7366
I0124 17:46:13.378145 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:13.378153 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:13.378160 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:13.378168 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:13.378181 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:13 GMT
I0124 17:46:13.378281 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
I0124 17:46:13.378712 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:13.378723 128080 round_trippers.go:469] Request Headers:
I0124 17:46:13.378730 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:13.378736 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:13.380607 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:13.380654 128080 round_trippers.go:577] Response Headers:
I0124 17:46:13.380673 128080 round_trippers.go:580] Audit-Id: 202813aa-db60-40e2-a2dc-1204bcc7918e
I0124 17:46:13.380693 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:13.380709 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:13.380718 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:13.380728 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:13.380740 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:13 GMT
I0124 17:46:13.380868 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:13.875422 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
I0124 17:46:13.875446 128080 round_trippers.go:469] Request Headers:
I0124 17:46:13.875458 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:13.875469 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:13.878282 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:13.878310 128080 round_trippers.go:577] Response Headers:
I0124 17:46:13.878321 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:13 GMT
I0124 17:46:13.878328 128080 round_trippers.go:580] Audit-Id: cb1b7f8f-87b9-4009-ae78-360e70b61905
I0124 17:46:13.878336 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:13.878344 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:13.878352 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:13.878361 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:13.878516 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
I0124 17:46:13.879120 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:13.879138 128080 round_trippers.go:469] Request Headers:
I0124 17:46:13.879150 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:13.879159 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:13.881242 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:13.881261 128080 round_trippers.go:577] Response Headers:
I0124 17:46:13.881271 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:13.881278 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:13.881287 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:13 GMT
I0124 17:46:13.881295 128080 round_trippers.go:580] Audit-Id: 72db6b49-460f-4450-92db-7e2979a109c9
I0124 17:46:13.881302 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:13.881312 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:13.881434 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:14.376088 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
I0124 17:46:14.376111 128080 round_trippers.go:469] Request Headers:
I0124 17:46:14.376129 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:14.376138 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:14.378844 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:14.378876 128080 round_trippers.go:577] Response Headers:
I0124 17:46:14.378887 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:14.378897 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:14.378905 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:14.378913 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:14.378922 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:14 GMT
I0124 17:46:14.378931 128080 round_trippers.go:580] Audit-Id: ae4240bb-f051-4daa-82ef-fc98fcfe8084
I0124 17:46:14.379058 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-5748b","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"eec968db-c6da-4e2a-a20f-de7ed82a64cf","resourceVersion":"364","creationTimestamp":"2023-01-24T17:46:11Z","deletionTimestamp":"2023-01-24T17:46:41Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 5885 chars]
I0124 17:46:14.379629 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:14.379645 128080 round_trippers.go:469] Request Headers:
I0124 17:46:14.379656 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:14.379667 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:14.381887 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:14.381909 128080 round_trippers.go:577] Response Headers:
I0124 17:46:14.381935 128080 round_trippers.go:580] Audit-Id: 38667b85-4684-4d19-9d49-5d98172087be
I0124 17:46:14.381948 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:14.381960 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:14.381972 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:14.381983 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:14.381995 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:14 GMT
I0124 17:46:14.382131 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:14.382508 128080 pod_ready.go:102] pod "coredns-787d4945fb-5748b" in "kube-system" namespace has status "Ready":"False"
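
The roughly 500ms spacing of the GETs above is the waiter's poll interval: re-fetch the pod, check its PodReady condition, repeat until True or timeout. A sketch of such a loop (waitPodReady is an illustrative name; wait.PollImmediate is the stock helper from k8s.io/apimachinery):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms until the pod's PodReady condition
    // is True, the timeout lapses, or the Get fails.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
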
I0124 17:46:14.875279 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-5748b
I0124 17:46:14.875301 128080 round_trippers.go:469] Request Headers:
I0124 17:46:14.875311 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:14.875321 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:14.877630 128080 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
I0124 17:46:14.877650 128080 round_trippers.go:577] Response Headers:
I0124 17:46:14.877657 128080 round_trippers.go:580] Audit-Id: 88cafee3-f464-4b59-84f8-ef367572e7ad
I0124 17:46:14.877663 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:14.877668 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:14.877679 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:14.877685 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:14.877693 128080 round_trippers.go:580] Content-Length: 216
I0124 17:46:14.877700 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:14 GMT
I0124 17:46:14.877725 128080 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-5748b\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-5748b","kind":"pods"},"code":404}
I0124 17:46:14.877889 128080 pod_ready.go:97] error getting pod "coredns-787d4945fb-5748b" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-5748b" not found
I0124 17:46:14.877906 128080 pod_ready.go:81] duration metric: took 2.511771691s waiting for pod "coredns-787d4945fb-5748b" in "kube-system" namespace to be "Ready" ...
E0124 17:46:14.877916 128080 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-5748b" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-5748b" not found
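
The 404 is expected: the earlier rescale to 1 replica deleted coredns-787d4945fb-5748b mid-wait, so the waiter logs "skipping!" and moves on to the surviving replica instead of failing. The standard way to distinguish that case (a sketch; podGone is an illustrative name):

    package sketch

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podGone reports whether the pod has disappeared (HTTP 404), which a
    // waiter should treat as "done, move on" rather than as a failure.
    func podGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        _, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            fmt.Printf("pod %q not found, skipping\n", name)
            return true, nil
        }
        return false, err
    }
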
I0124 17:46:14.877930 128080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
I0124 17:46:14.877990 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:14.877998 128080 round_trippers.go:469] Request Headers:
I0124 17:46:14.878005 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:14.878013 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:14.880246 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:14.880266 128080 round_trippers.go:577] Response Headers:
I0124 17:46:14.880274 128080 round_trippers.go:580] Audit-Id: 940ac1ec-8737-4632-a5a2-06c8c4e3ced3
I0124 17:46:14.880282 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:14.880294 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:14.880302 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:14.880310 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:14.880318 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:14 GMT
I0124 17:46:14.880441 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:14.881059 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:14.881082 128080 round_trippers.go:469] Request Headers:
I0124 17:46:14.881096 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:14.881104 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:14.883101 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:14.883121 128080 round_trippers.go:577] Response Headers:
I0124 17:46:14.883130 128080 round_trippers.go:580] Audit-Id: ea1669fc-d5aa-4c4c-bdd2-4af6705d6d4f
I0124 17:46:14.883138 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:14.883146 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:14.883155 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:14.883167 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:14.883182 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:14 GMT
I0124 17:46:14.883289 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:15.384293 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:15.384311 128080 round_trippers.go:469] Request Headers:
I0124 17:46:15.384320 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:15.384326 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:15.386563 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:15.386587 128080 round_trippers.go:577] Response Headers:
I0124 17:46:15.386595 128080 round_trippers.go:580] Audit-Id: c413c0be-794a-4ea6-b502-4dc6f6964015
I0124 17:46:15.386602 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:15.386611 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:15.386620 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:15.386633 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:15.386644 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:15 GMT
I0124 17:46:15.386768 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:15.387210 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:15.387224 128080 round_trippers.go:469] Request Headers:
I0124 17:46:15.387231 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:15.387237 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:15.389055 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:15.389077 128080 round_trippers.go:577] Response Headers:
I0124 17:46:15.389086 128080 round_trippers.go:580] Audit-Id: af3c6647-ee30-4bee-b064-6474453ab36c
I0124 17:46:15.389095 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:15.389103 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:15.389108 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:15.389114 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:15.389122 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:15 GMT
I0124 17:46:15.389247 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:15.883815 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:15.883835 128080 round_trippers.go:469] Request Headers:
I0124 17:46:15.883844 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:15.883850 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:15.886020 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:15.886046 128080 round_trippers.go:577] Response Headers:
I0124 17:46:15.886056 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:15.886064 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:15.886072 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:15.886079 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:15.886087 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:15 GMT
I0124 17:46:15.886096 128080 round_trippers.go:580] Audit-Id: a44c8b78-d512-4ca7-9d90-73d0b45e2a7d
I0124 17:46:15.886187 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:15.886627 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:15.886642 128080 round_trippers.go:469] Request Headers:
I0124 17:46:15.886650 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:15.886656 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:15.888435 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:15.888459 128080 round_trippers.go:577] Response Headers:
I0124 17:46:15.888466 128080 round_trippers.go:580] Audit-Id: 08e0ef81-d9ec-409f-9c3d-ec2870dfa238
I0124 17:46:15.888472 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:15.888477 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:15.888482 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:15.888487 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:15.888492 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:15 GMT
I0124 17:46:15.888685 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:16.384141 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:16.384161 128080 round_trippers.go:469] Request Headers:
I0124 17:46:16.384169 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:16.384176 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:16.386375 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:16.386401 128080 round_trippers.go:577] Response Headers:
I0124 17:46:16.386410 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:16.386417 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:16.386423 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:16 GMT
I0124 17:46:16.386431 128080 round_trippers.go:580] Audit-Id: 3a22b1f2-0462-49ac-aa98-6495658c05b5
I0124 17:46:16.386437 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:16.386445 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:16.386552 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:16.387027 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:16.387042 128080 round_trippers.go:469] Request Headers:
I0124 17:46:16.387049 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:16.387055 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:16.388772 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:16.388791 128080 round_trippers.go:577] Response Headers:
I0124 17:46:16.388798 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:16.388805 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:16 GMT
I0124 17:46:16.388810 128080 round_trippers.go:580] Audit-Id: 4ca7226d-76d9-4a77-8734-861d3a15195d
I0124 17:46:16.388826 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:16.388839 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:16.388850 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:16.388965 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:16.884670 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:16.884690 128080 round_trippers.go:469] Request Headers:
I0124 17:46:16.884699 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:16.884706 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:16.886775 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:16.886805 128080 round_trippers.go:577] Response Headers:
I0124 17:46:16.886816 128080 round_trippers.go:580] Audit-Id: 55765fb3-15d2-4efa-bb09-0de3f5ab5931
I0124 17:46:16.886825 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:16.886833 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:16.886840 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:16.886849 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:16.886859 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:16 GMT
I0124 17:46:16.886966 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:16.887385 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:16.887397 128080 round_trippers.go:469] Request Headers:
I0124 17:46:16.887404 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:16.887410 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:16.889089 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:16.889107 128080 round_trippers.go:577] Response Headers:
I0124 17:46:16.889113 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:16.889119 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:16.889124 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:16 GMT
I0124 17:46:16.889129 128080 round_trippers.go:580] Audit-Id: cbe9bdce-a7d6-4f16-913a-c9ecf43e8706
I0124 17:46:16.889134 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:16.889140 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:16.889278 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:16.889574 128080 pod_ready.go:102] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"False"
I0124 17:46:17.383859 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:17.383880 128080 round_trippers.go:469] Request Headers:
I0124 17:46:17.383888 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:17.383894 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:17.386215 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:17.386247 128080 round_trippers.go:577] Response Headers:
I0124 17:46:17.386258 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:17 GMT
I0124 17:46:17.386267 128080 round_trippers.go:580] Audit-Id: 8c956872-b5ca-4ed1-a5a5-dc56b5e38ec7
I0124 17:46:17.386293 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:17.386305 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:17.386315 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:17.386324 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:17.386475 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:17.386940 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:17.386953 128080 round_trippers.go:469] Request Headers:
I0124 17:46:17.386960 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:17.386966 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:17.388766 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:17.388788 128080 round_trippers.go:577] Response Headers:
I0124 17:46:17.388797 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:17.388806 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:17.388815 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:17.388824 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:17 GMT
I0124 17:46:17.388841 128080 round_trippers.go:580] Audit-Id: ee94f717-d9ca-40a2-8215-1f90f0ac56c4
I0124 17:46:17.388850 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:17.388953 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:17.884657 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:17.884681 128080 round_trippers.go:469] Request Headers:
I0124 17:46:17.884689 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:17.884695 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:17.886917 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:17.886944 128080 round_trippers.go:577] Response Headers:
I0124 17:46:17.886954 128080 round_trippers.go:580] Audit-Id: 478d2475-62db-43cc-91af-13026422d22e
I0124 17:46:17.886963 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:17.886972 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:17.886980 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:17.886989 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:17.886994 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:17 GMT
I0124 17:46:17.887077 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:17.887512 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:17.887524 128080 round_trippers.go:469] Request Headers:
I0124 17:46:17.887531 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:17.887537 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:17.889214 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:17.889235 128080 round_trippers.go:577] Response Headers:
I0124 17:46:17.889244 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:17 GMT
I0124 17:46:17.889252 128080 round_trippers.go:580] Audit-Id: d9767526-9706-4906-9274-29b305037d36
I0124 17:46:17.889260 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:17.889272 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:17.889283 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:17.889292 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:17.889392 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:18.383995 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:18.384017 128080 round_trippers.go:469] Request Headers:
I0124 17:46:18.384025 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:18.384031 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:18.386312 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:18.386342 128080 round_trippers.go:577] Response Headers:
I0124 17:46:18.386352 128080 round_trippers.go:580] Audit-Id: b3169cd9-91e5-4b7d-b0fb-39d25ed6eaea
I0124 17:46:18.386360 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:18.386372 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:18.386388 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:18.386397 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:18.386408 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:18 GMT
I0124 17:46:18.386517 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:18.387048 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:18.387065 128080 round_trippers.go:469] Request Headers:
I0124 17:46:18.387077 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:18.387092 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:18.388772 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:18.388793 128080 round_trippers.go:577] Response Headers:
I0124 17:46:18.388802 128080 round_trippers.go:580] Audit-Id: 07d8bef4-63b1-41fc-bc08-a451252319c0
I0124 17:46:18.388810 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:18.388815 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:18.388820 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:18.388829 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:18.388837 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:18 GMT
I0124 17:46:18.388943 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"332","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:18.884607 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:18.884627 128080 round_trippers.go:469] Request Headers:
I0124 17:46:18.884636 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:18.884642 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:18.886715 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:18.886734 128080 round_trippers.go:577] Response Headers:
I0124 17:46:18.886741 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:18 GMT
I0124 17:46:18.886747 128080 round_trippers.go:580] Audit-Id: 3ab135f8-9f0b-4db7-b22a-738f29bccccd
I0124 17:46:18.886752 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:18.886757 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:18.886762 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:18.886767 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:18.886897 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:18.887336 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:18.887347 128080 round_trippers.go:469] Request Headers:
I0124 17:46:18.887355 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:18.887361 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:18.889083 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:18.889105 128080 round_trippers.go:577] Response Headers:
I0124 17:46:18.889114 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:18 GMT
I0124 17:46:18.889126 128080 round_trippers.go:580] Audit-Id: 0b42a62f-9b39-4c3b-bb5a-75c434690c59
I0124 17:46:18.889135 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:18.889145 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:18.889154 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:18.889160 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:18.889257 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:19.383850 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:19.383868 128080 round_trippers.go:469] Request Headers:
I0124 17:46:19.383877 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:19.383883 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:19.386512 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:19.386534 128080 round_trippers.go:577] Response Headers:
I0124 17:46:19.386542 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:19.386549 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:19.386558 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:19 GMT
I0124 17:46:19.386575 128080 round_trippers.go:580] Audit-Id: 0fc11ad0-729d-44ed-b7d1-2b8906765599
I0124 17:46:19.386583 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:19.386593 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:19.386715 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:19.387251 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:19.387265 128080 round_trippers.go:469] Request Headers:
I0124 17:46:19.387274 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:19.387287 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:19.389059 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:19.389081 128080 round_trippers.go:577] Response Headers:
I0124 17:46:19.389090 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:19.389099 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:19.389107 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:19.389116 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:19 GMT
I0124 17:46:19.389128 128080 round_trippers.go:580] Audit-Id: 3ba4caea-3e2b-4e5b-87c0-5b93f9b4aa61
I0124 17:46:19.389139 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:19.389235 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:19.389559 128080 pod_ready.go:102] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"False"
I0124 17:46:19.883802 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:19.883834 128080 round_trippers.go:469] Request Headers:
I0124 17:46:19.883850 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:19.883860 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:19.886156 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:19.886182 128080 round_trippers.go:577] Response Headers:
I0124 17:46:19.886193 128080 round_trippers.go:580] Audit-Id: 347a1eae-e9ca-4a14-b52d-28006cac3924
I0124 17:46:19.886202 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:19.886209 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:19.886231 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:19.886243 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:19.886256 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:19 GMT
I0124 17:46:19.886364 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"373","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
I0124 17:46:19.886833 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:19.886847 128080 round_trippers.go:469] Request Headers:
I0124 17:46:19.886854 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:19.886860 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:19.888555 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:19.888577 128080 round_trippers.go:577] Response Headers:
I0124 17:46:19.888585 128080 round_trippers.go:580] Audit-Id: cba8f115-4fb9-48ad-8247-51bda1b13d4d
I0124 17:46:19.888591 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:19.888596 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:19.888602 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:19.888608 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:19.888620 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:19 GMT
I0124 17:46:19.888738 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:20.384344 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:20.384364 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.384373 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.384378 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.386534 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:20.386557 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.386567 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.386577 128080 round_trippers.go:580] Audit-Id: 6e617859-d68f-497e-ac28-252c7bd34b25
I0124 17:46:20.386586 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.386594 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.386600 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.386608 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.386729 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5901 chars]
I0124 17:46:20.387203 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:20.387216 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.387223 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.387230 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.388896 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.388912 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.388918 128080 round_trippers.go:580] Audit-Id: 29e61c99-e2b2-4074-804e-dc3c5622caa5
I0124 17:46:20.388928 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.388939 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.388946 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.388954 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.388966 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.389092 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:20.389413 128080 pod_ready.go:92] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:20.389433 128080 pod_ready.go:81] duration metric: took 5.51149317s waiting for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.389444 128080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
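The pod_ready.go lines are the readable summary of all the polling above: minikube re-fetches the pod (and its node) roughly every 500ms — judging by the request timestamps — until the pod's Ready condition turns True, with the 6m0s budget it announces per pod. A minimal client-go sketch of that wait-for-Ready pattern; the function name, the 500ms interval, and the fail-fast error handling are assumptions for illustration, not minikube's actual pod_ready.go code:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls the API server until the named pod reports the Ready
// condition as True, or the timeout elapses. The 500ms interval is inferred
// from the request timestamps in the trace above; it is an assumption, not
// minikube's documented behavior.
func WaitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // a real poller might tolerate transient errors instead
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

Against this cluster it would be called as WaitPodReady(ctx, clientset, "kube-system", "etcd-multinode-585561", 6*time.Minute); the 5.51149317s coredns wait reported above is consistent with roughly eleven such 500ms polls.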
I0124 17:46:20.389488 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-585561
I0124 17:46:20.389495 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.389502 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.389512 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.391285 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.391306 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.391315 128080 round_trippers.go:580] Audit-Id: 08b89957-cd1b-435b-9368-d003f69af723
I0124 17:46:20.391324 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.391336 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.391347 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.391362 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.391378 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.391465 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-585561","namespace":"kube-system","uid":"e90a4912-09cf-4017-b275-36e5cbaf8fb7","resourceVersion":"307","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.mirror":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.seen":"2023-01-24T17:45:58.071793905Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5806 chars]
I0124 17:46:20.391863 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:20.391876 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.391882 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.391891 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.393484 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.393504 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.393513 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.393522 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.393533 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.393544 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.393555 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.393564 128080 round_trippers.go:580] Audit-Id: 9d517b27-925a-470b-88a5-106c5a23187a
I0124 17:46:20.393690 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:20.393955 128080 pod_ready.go:92] pod "etcd-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:20.393965 128080 pod_ready.go:81] duration metric: took 4.514247ms waiting for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.393976 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.394011 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-585561
I0124 17:46:20.394018 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.394025 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.394031 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.395611 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.395630 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.395641 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.395650 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.395659 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.395664 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.395674 128080 round_trippers.go:580] Audit-Id: 92a0a131-6edc-4951-9b5e-e6a42480379b
I0124 17:46:20.395680 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.395778 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-585561","namespace":"kube-system","uid":"b6111d69-e414-4456-b981-c45749f2bc69","resourceVersion":"270","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.mirror":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.seen":"2023-01-24T17:45:58.071829413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
I0124 17:46:20.396163 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:20.396176 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.396188 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.396197 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.397602 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.397620 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.397631 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.397638 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.397646 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.397654 128080 round_trippers.go:580] Audit-Id: d58babc9-9368-4cc7-857b-5e9ec5fa997c
I0124 17:46:20.397664 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.397676 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.397763 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:20.398120 128080 pod_ready.go:92] pod "kube-apiserver-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:20.398135 128080 pod_ready.go:81] duration metric: took 4.153168ms waiting for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.398145 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.398196 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-585561
I0124 17:46:20.398207 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.398217 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.398229 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.399799 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.399825 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.399834 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.399840 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.399846 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.399854 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.399860 128080 round_trippers.go:580] Audit-Id: 13064886-e712-451b-8ec0-faadd272e681
I0124 17:46:20.399867 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.400012 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-585561","namespace":"kube-system","uid":"64983300-9251-4324-9c8a-e0ff30ae4238","resourceVersion":"385","creationTimestamp":"2023-01-24T17:45:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.mirror":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.seen":"2023-01-24T17:45:47.607485043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
I0124 17:46:20.400426 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:20.400439 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.400445 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.400451 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.401834 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.401853 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.401864 128080 round_trippers.go:580] Audit-Id: 1d611837-4e9a-4ab3-a31c-5611ca6544be
I0124 17:46:20.401872 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.401877 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.401882 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.401888 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.401897 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.401984 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:20.402241 128080 pod_ready.go:92] pod "kube-controller-manager-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:20.402247 128080 pod_ready.go:81] duration metric: took 4.093647ms waiting for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.402255 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.402287 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrvx
I0124 17:46:20.402290 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.402297 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.402303 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.403747 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.403765 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.403776 128080 round_trippers.go:580] Audit-Id: 34077473-fdc5-4cd9-a206-58d2e6eb561d
I0124 17:46:20.403784 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.403797 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.403813 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.403824 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.403831 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.403943 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wxrvx","generateName":"kube-proxy-","namespace":"kube-system","uid":"435cbf4e-148f-46a7-894c-73bea3a2bb9c","resourceVersion":"386","creationTimestamp":"2023-01-24T17:46:10Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
I0124 17:46:20.404298 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:20.404310 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.404317 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.404323 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.405746 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:20.405764 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.405773 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.405782 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.405790 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.405802 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.405813 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.405823 128080 round_trippers.go:580] Audit-Id: 5caf9958-5dbd-409f-8628-ecf17330c11d
I0124 17:46:20.405906 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:20.406176 128080 pod_ready.go:92] pod "kube-proxy-wxrvx" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:20.406188 128080 pod_ready.go:81] duration metric: took 3.928229ms waiting for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.406195 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.584563 128080 request.go:622] Waited for 178.269204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
I0124 17:46:20.584621 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
I0124 17:46:20.584640 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.584648 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.584655 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.586797 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:20.586826 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.586836 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.586844 128080 round_trippers.go:580] Audit-Id: fcf79f37-f1fe-418f-b173-0e39c46a871b
I0124 17:46:20.586852 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.586861 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.586869 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.586876 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.586969 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-585561","namespace":"kube-system","uid":"99936e13-49bf-4ab3-82ea-812373f654b6","resourceVersion":"291","creationTimestamp":"2023-01-24T17:45:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.mirror":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.seen":"2023-01-24T17:45:47.607460620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
I0124 17:46:20.784718 128080 request.go:622] Waited for 197.352781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:20.784783 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:20.784788 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.784801 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.784812 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.786986 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:20.787013 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.787023 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.787031 128080 round_trippers.go:580] Audit-Id: b6d78ce3-520f-4b89-8d8d-a9f8728cabd2
I0124 17:46:20.787039 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.787047 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.787055 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.787065 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.787161 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5208 chars]
I0124 17:46:20.787497 128080 pod_ready.go:92] pod "kube-scheduler-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:20.787509 128080 pod_ready.go:81] duration metric: took 381.309469ms waiting for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:20.787519 128080 pod_ready.go:38] duration metric: took 8.428834418s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
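The pod_ready lines above poll each control-plane pod (and its node) until the pod reports Ready. A minimal client-go sketch of the same check, not minikube's actual helper; the kubeconfig path and pod name are placeholders taken from this run, and the 6-minute budget matches the log:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady applies the test the pod_ready.go lines log: a pod counts
// as "Ready" once its PodReady condition is ConditionTrue.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget seen in the log
    for time.Now().Before(deadline) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "kube-scheduler-multinode-585561", metav1.GetOptions{})
        if err == nil && isPodReady(pod) {
            fmt.Println("pod is Ready")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for pod to be Ready")
}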
I0124 17:46:20.787536 128080 api_server.go:51] waiting for apiserver process to appear ...
I0124 17:46:20.787613 128080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0124 17:46:20.797528 128080 command_runner.go:130] > 2612
I0124 17:46:20.797565 128080 api_server.go:71] duration metric: took 8.958192044s to wait for apiserver process to appear ...
I0124 17:46:20.797584 128080 api_server.go:87] waiting for apiserver healthz status ...
I0124 17:46:20.797595 128080 api_server.go:252] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
I0124 17:46:20.800990 128080 api_server.go:278] https://192.168.58.2:8443/healthz returned 200:
ok
I0124 17:46:20.801052 128080 round_trippers.go:463] GET https://192.168.58.2:8443/version
I0124 17:46:20.801063 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.801075 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.801089 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.801726 128080 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0124 17:46:20.801741 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.801748 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.801754 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.801759 128080 round_trippers.go:580] Content-Length: 263
I0124 17:46:20.801765 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.801773 128080 round_trippers.go:580] Audit-Id: caa658d8-bf09-4459-8140-61696daba67d
I0124 17:46:20.801778 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.801788 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.801804 128080 request.go:1171] Response Body: {
"major": "1",
"minor": "26",
"gitVersion": "v1.26.1",
"gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
"gitTreeState": "clean",
"buildDate": "2023-01-18T15:51:25Z",
"goVersion": "go1.19.5",
"compiler": "gc",
"platform": "linux/amd64"
}
I0124 17:46:20.801875 128080 api_server.go:140] control plane version: v1.26.1
I0124 17:46:20.801887 128080 api_server.go:130] duration metric: took 4.299117ms to wait for apiserver health ...
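The healthz and version probes above are plain HTTPS GETs against the apiserver; both paths are readable anonymously on a default-RBAC cluster. A rough net/http equivalent -- InsecureSkipVerify stands in for the cluster-CA trust minikube actually configures:

package main

import (
    "crypto/tls"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

func main() {
    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
    }}

    resp, err := client.Get("https://192.168.58.2:8443/healthz")
    if err != nil {
        panic(err)
    }
    body, _ := io.ReadAll(resp.Body)
    resp.Body.Close()
    fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

    resp, err = client.Get("https://192.168.58.2:8443/version")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    var v struct {
        GitVersion string `json:"gitVersion"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
        panic(err)
    }
    fmt.Println("control plane version:", v.GitVersion) // v1.26.1 in this run
}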
I0124 17:46:20.801894 128080 system_pods.go:43] waiting for kube-system pods to appear ...
I0124 17:46:20.985277 128080 request.go:622] Waited for 183.329754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0124 17:46:20.985347 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0124 17:46:20.985384 128080 round_trippers.go:469] Request Headers:
I0124 17:46:20.985399 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:20.985410 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:20.988263 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:20.988290 128080 round_trippers.go:577] Response Headers:
I0124 17:46:20.988301 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:20.988319 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:20.988328 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:20 GMT
I0124 17:46:20.988334 128080 round_trippers.go:580] Audit-Id: cf858d98-7d36-4bcd-8adf-c08e3b82112d
I0124 17:46:20.988342 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:20.988349 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:20.988790 128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54922 chars]
I0124 17:46:20.990484 128080 system_pods.go:59] 8 kube-system pods found
I0124 17:46:20.990503 128080 system_pods.go:61] "coredns-787d4945fb-lfdwf" [3ad6d110-548d-4cec-bae8-945a1e7d7853] Running
I0124 17:46:20.990508 128080 system_pods.go:61] "etcd-multinode-585561" [e90a4912-09cf-4017-b275-36e5cbaf8fb7] Running
I0124 17:46:20.990515 128080 system_pods.go:61] "kindnet-4zggw" [17440d73-e612-44ea-a341-3d018744042f] Running
I0124 17:46:20.990519 128080 system_pods.go:61] "kube-apiserver-multinode-585561" [b6111d69-e414-4456-b981-c45749f2bc69] Running
I0124 17:46:20.990526 128080 system_pods.go:61] "kube-controller-manager-multinode-585561" [64983300-9251-4324-9c8a-e0ff30ae4238] Running
I0124 17:46:20.990530 128080 system_pods.go:61] "kube-proxy-wxrvx" [435cbf4e-148f-46a7-894c-73bea3a2bb9c] Running
I0124 17:46:20.990535 128080 system_pods.go:61] "kube-scheduler-multinode-585561" [99936e13-49bf-4ab3-82ea-812373f654b6] Running
I0124 17:46:20.990541 128080 system_pods.go:61] "storage-provisioner" [f521d253-9340-4d51-b6da-fa5443e09527] Running
I0124 17:46:20.990546 128080 system_pods.go:74] duration metric: took 188.648411ms to wait for pod list to return data ...
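A sketch of the system_pods listing, assuming a clientset built as in the first sketch (imports: context, fmt, metav1, kubernetes):

// listSystemPods mirrors the system_pods.go:61 lines above: list the
// kube-system pods and print each name, UID, and phase.
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    for _, p := range pods.Items {
        fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    }
    return nil
}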
I0124 17:46:20.990557 128080 default_sa.go:34] waiting for default service account to be created ...
I0124 17:46:21.185033 128080 request.go:622] Waited for 194.413978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
I0124 17:46:21.185083 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
I0124 17:46:21.185089 128080 round_trippers.go:469] Request Headers:
I0124 17:46:21.185097 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:21.185107 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:21.187323 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:21.187344 128080 round_trippers.go:577] Response Headers:
I0124 17:46:21.187351 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:21.187357 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:21.187363 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:21.187368 128080 round_trippers.go:580] Content-Length: 261
I0124 17:46:21.187373 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:21 GMT
I0124 17:46:21.187381 128080 round_trippers.go:580] Audit-Id: 015c9f4b-5f6d-4c61-976a-399fcd3f6df6
I0124 17:46:21.187386 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:21.187408 128080 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"15e42089-c766-4da1-bcdd-80148829615f","resourceVersion":"329","creationTimestamp":"2023-01-24T17:46:10Z"}}]}
I0124 17:46:21.187565 128080 default_sa.go:45] found service account: "default"
I0124 17:46:21.187577 128080 default_sa.go:55] duration metric: took 197.0119ms for default service account to be created ...
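The default_sa wait only has to outlast kube-controller-manager's service-account controller, which creates "default" moments after the namespace appears -- hence the ~197ms here. A sketch of the check, under the same clientset assumption:

// defaultSAExists mirrors default_sa.go: list the ServiceAccounts in the
// "default" namespace and look for one named "default".
func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
    if err != nil {
        return false, err
    }
    for _, sa := range sas.Items {
        if sa.Name == "default" {
            return true, nil
        }
    }
    return false, nil
}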
I0124 17:46:21.187586 128080 system_pods.go:116] waiting for k8s-apps to be running ...
I0124 17:46:21.384805 128080 request.go:622] Waited for 197.148231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0124 17:46:21.384950 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0124 17:46:21.384979 128080 round_trippers.go:469] Request Headers:
I0124 17:46:21.384992 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:21.385006 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:21.387897 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:21.387922 128080 round_trippers.go:577] Response Headers:
I0124 17:46:21.387934 128080 round_trippers.go:580] Audit-Id: 9c05e512-ec8a-4da9-8ec1-9fc849724916
I0124 17:46:21.387940 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:21.387950 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:21.387959 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:21.387968 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:21.387980 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:21 GMT
I0124 17:46:21.388356 128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54922 chars]
I0124 17:46:21.390057 128080 system_pods.go:86] 8 kube-system pods found
I0124 17:46:21.390090 128080 system_pods.go:89] "coredns-787d4945fb-lfdwf" [3ad6d110-548d-4cec-bae8-945a1e7d7853] Running
I0124 17:46:21.390096 128080 system_pods.go:89] "etcd-multinode-585561" [e90a4912-09cf-4017-b275-36e5cbaf8fb7] Running
I0124 17:46:21.390100 128080 system_pods.go:89] "kindnet-4zggw" [17440d73-e612-44ea-a341-3d018744042f] Running
I0124 17:46:21.390107 128080 system_pods.go:89] "kube-apiserver-multinode-585561" [b6111d69-e414-4456-b981-c45749f2bc69] Running
I0124 17:46:21.390112 128080 system_pods.go:89] "kube-controller-manager-multinode-585561" [64983300-9251-4324-9c8a-e0ff30ae4238] Running
I0124 17:46:21.390117 128080 system_pods.go:89] "kube-proxy-wxrvx" [435cbf4e-148f-46a7-894c-73bea3a2bb9c] Running
I0124 17:46:21.390121 128080 system_pods.go:89] "kube-scheduler-multinode-585561" [99936e13-49bf-4ab3-82ea-812373f654b6] Running
I0124 17:46:21.390127 128080 system_pods.go:89] "storage-provisioner" [f521d253-9340-4d51-b6da-fa5443e09527] Running
I0124 17:46:21.390133 128080 system_pods.go:126] duration metric: took 202.542832ms to wait for k8s-apps to be running ...
I0124 17:46:21.390141 128080 system_svc.go:44] waiting for kubelet service to be running ...
I0124 17:46:21.390180 128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0124 17:46:21.399718 128080 system_svc.go:56] duration metric: took 9.569922ms WaitForService to wait for kubelet.
I0124 17:46:21.399739 128080 kubeadm.go:578] duration metric: took 9.560367944s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
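The recurring request.go:622 "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's local token bucket (default QPS 5, burst 10), not from the API server. A polling loop like this one would stop waiting if its rest.Config were tuned; the values below are arbitrary examples:

import "k8s.io/client-go/rest"

// quietThrottle raises client-go's default rate limit (QPS 5, Burst 10),
// which is what emits the request.go:622 waits seen throughout this log.
func quietThrottle(cfg *rest.Config) {
    cfg.QPS = 50
    cfg.Burst = 100
}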
I0124 17:46:21.399757 128080 node_conditions.go:102] verifying NodePressure condition ...
I0124 17:46:21.585188 128080 request.go:622] Waited for 185.358538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
I0124 17:46:21.585240 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
I0124 17:46:21.585245 128080 round_trippers.go:469] Request Headers:
I0124 17:46:21.585253 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:21.585259 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:21.587376 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:21.587398 128080 round_trippers.go:577] Response Headers:
I0124 17:46:21.587409 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:21.587419 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:21.587430 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:21.587443 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:21.587452 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:21 GMT
I0124 17:46:21.587458 128080 round_trippers.go:580] Audit-Id: f6fbab4c-0ab2-405b-92c7-e741f3432606
I0124 17:46:21.587556 128080 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"426"},"items":[{"metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"412","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5261 chars]
I0124 17:46:21.588052 128080 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0124 17:46:21.588075 128080 node_conditions.go:123] node cpu capacity is 8
I0124 17:46:21.588089 128080 node_conditions.go:105] duration metric: took 188.325958ms to run NodePressure ...
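The NodePressure verification reads capacity off each node's status, as the two node_conditions lines show. A sketch under the same clientset assumption (corev1 as imported in the first sketch):

// nodePressure mirrors node_conditions.go: report each node's ephemeral
// storage and CPU capacity (304681132Ki and 8 in this run).
func nodePressure(ctx context.Context, cs kubernetes.Interface) error {
    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, n := range nodes.Items {
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
            n.Name, storage.String(), cpu.String())
    }
    return nil
}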
I0124 17:46:21.588105 128080 start.go:226] waiting for startup goroutines ...
I0124 17:46:21.590706 128080 out.go:177]
I0124 17:46:21.592588 128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:46:21.592702 128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:46:21.594822 128080 out.go:177] * Starting worker node multinode-585561-m02 in cluster multinode-585561
I0124 17:46:21.596265 128080 cache.go:120] Beginning downloading kic base image for docker with docker
I0124 17:46:21.597851 128080 out.go:177] * Pulling base image ...
I0124 17:46:21.599887 128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:46:21.599916 128080 cache.go:57] Caching tarball of preloaded images
I0124 17:46:21.599989 128080 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
I0124 17:46:21.600028 128080 preload.go:174] Found /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0124 17:46:21.600038 128080 cache.go:60] Finished verifying existence of preloaded tar for v1.26.1 on docker
I0124 17:46:21.600133 128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:46:21.625597 128080 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
I0124 17:46:21.625617 128080 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
I0124 17:46:21.625641 128080 cache.go:193] Successfully downloaded all kic artifacts
I0124 17:46:21.625676 128080 start.go:364] acquiring machines lock for multinode-585561-m02: {Name:mkf9f5cd760f22fd0c5ef803f9e297631aab81d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0124 17:46:21.625790 128080 start.go:368] acquired machines lock for "multinode-585561-m02" in 94.878µs
I0124 17:46:21.625821 128080 start.go:93] Provisioning new machine with config: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0124 17:46:21.625908 128080 start.go:125] createHost starting for "m02" (driver="docker")
I0124 17:46:21.629423 128080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0124 17:46:21.629553 128080 start.go:159] libmachine.API.Create for "multinode-585561" (driver="docker")
I0124 17:46:21.629585 128080 client.go:168] LocalClient.Create starting
I0124 17:46:21.629672 128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem
I0124 17:46:21.629706 128080 main.go:141] libmachine: Decoding PEM data...
I0124 17:46:21.629723 128080 main.go:141] libmachine: Parsing certificate...
I0124 17:46:21.629773 128080 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem
I0124 17:46:21.629794 128080 main.go:141] libmachine: Decoding PEM data...
I0124 17:46:21.629805 128080 main.go:141] libmachine: Parsing certificate...
I0124 17:46:21.630003 128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 17:46:21.652201 128080 network_create.go:76] Found existing network {name:multinode-585561 subnet:0xc0010ec180 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
I0124 17:46:21.652254 128080 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-585561-m02" container
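The static IP follows from the network found above: gateway 192.168.58.1, control plane on .2, so the first added worker lands on .3. A toy IPv4-only helper (not minikube's actual function) that reproduces the arithmetic:

// nodeIP offsets the last octet of the network gateway; node index 2
// (m02) yields 192.168.58.3, matching the kic.go line above.
func nodeIP(gateway net.IP, nodeIndex int) net.IP {
    ip := make(net.IP, 4)
    copy(ip, gateway.To4()) // assumes an IPv4 network
    ip[3] += byte(nodeIndex)
    return ip
}

// usage: nodeIP(net.ParseIP("192.168.58.1"), 2) -> 192.168.58.3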
I0124 17:46:21.652314 128080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0124 17:46:21.673644 128080 cli_runner.go:164] Run: docker volume create multinode-585561-m02 --label name.minikube.sigs.k8s.io=multinode-585561-m02 --label created_by.minikube.sigs.k8s.io=true
I0124 17:46:21.696938 128080 oci.go:103] Successfully created a docker volume multinode-585561-m02
I0124 17:46:21.697007 128080 cli_runner.go:164] Run: docker run --rm --name multinode-585561-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561-m02 --entrypoint /usr/bin/test -v multinode-585561-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
I0124 17:46:22.233519 128080 oci.go:107] Successfully prepared a docker volume multinode-585561-m02
I0124 17:46:22.233553 128080 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0124 17:46:22.233571 128080 kic.go:190] Starting extracting preloaded images to volume ...
I0124 17:46:22.233649 128080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
I0124 17:46:27.669677 128080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15565-3637/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-585561-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (5.43597092s)
I0124 17:46:27.669726 128080 kic.go:199] duration metric: took 5.436150 seconds to extract preloaded images to volume
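The five-second step above is the whole preload trick: bind-mount the lz4 tarball read-only, bind-mount the named volume, and let tar inside the base image unpack one into the other. An os/exec sketch with illustrative paths (the tarball location is a placeholder):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    tarball := "/path/to/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4" // placeholder
    volume := "multinode-585561-m02"
    image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541"
    // Override the entrypoint so the container does nothing but extract.
    cmd := exec.Command("docker", "run", "--rm",
        "--entrypoint", "/usr/bin/tar",
        "-v", tarball+":/preloaded.tar:ro",
        "-v", volume+":/extractDir",
        image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    if out, err := cmd.CombinedOutput(); err != nil {
        fmt.Printf("extract failed: %v\n%s", err, out)
    }
}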
W0124 17:46:27.669858 128080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0124 17:46:27.669961 128080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0124 17:46:27.764315 128080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-585561-m02 --name multinode-585561-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-585561-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-585561-m02 --network multinode-585561 --ip 192.168.58.3 --volume multinode-585561-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
I0124 17:46:28.127108 128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Running}}
I0124 17:46:28.152113 128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
I0124 17:46:28.179018 128080 cli_runner.go:164] Run: docker exec multinode-585561-m02 stat /var/lib/dpkg/alternatives/iptables
I0124 17:46:28.228386 128080 oci.go:144] the created container "multinode-585561-m02" has a running status.
I0124 17:46:28.228422 128080 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa...
I0124 17:46:28.453417 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0124 17:46:28.453456 128080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0124 17:46:28.529815 128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
I0124 17:46:28.558374 128080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0124 17:46:28.558401 128080 kic_runner.go:114] Args: [docker exec --privileged multinode-585561-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
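kic_runner executes node-local commands via `docker exec --privileged`, as the Args line above shows. A sketch of the same call (assumes os/exec and fmt are imported):

// chownAuthorizedKeys reproduces the logged exec: fix ownership of the
// injected SSH key inside the node container.
func chownAuthorizedKeys() error {
    cmd := exec.Command("docker", "exec", "--privileged", "multinode-585561-m02",
        "chown", "docker:docker", "/home/docker/.ssh/authorized_keys")
    if out, err := cmd.CombinedOutput(); err != nil {
        return fmt.Errorf("chown failed: %v: %s", err, out)
    }
    return nil
}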
I0124 17:46:28.635508 128080 cli_runner.go:164] Run: docker container inspect multinode-585561-m02 --format={{.State.Status}}
I0124 17:46:28.660755 128080 machine.go:88] provisioning docker machine ...
I0124 17:46:28.660792 128080 ubuntu.go:169] provisioning hostname "multinode-585561-m02"
I0124 17:46:28.660865 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:28.684684 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:46:28.684859 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0124 17:46:28.684875 128080 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-585561-m02 && echo "multinode-585561-m02" | sudo tee /etc/hostname
I0124 17:46:28.826545 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-585561-m02
I0124 17:46:28.826632 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:28.851597 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:46:28.851770 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0124 17:46:28.851798 128080 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-585561-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-585561-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-585561-m02' | sudo tee -a /etc/hosts;
fi
fi
I0124 17:46:28.984415 128080 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0124 17:46:28.984451 128080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15565-3637/.minikube CaCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15565-3637/.minikube}
I0124 17:46:28.984466 128080 ubuntu.go:177] setting up certificates
I0124 17:46:28.984473 128080 provision.go:83] configureAuth start
I0124 17:46:28.984561 128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m02
I0124 17:46:29.008459 128080 provision.go:138] copyHostCerts
I0124 17:46:29.008526 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
I0124 17:46:29.008556 128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem, removing ...
I0124 17:46:29.008567 128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem
I0124 17:46:29.008643 128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/cert.pem (1123 bytes)
I0124 17:46:29.008738 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
I0124 17:46:29.008756 128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem, removing ...
I0124 17:46:29.008760 128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem
I0124 17:46:29.008784 128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/key.pem (1679 bytes)
I0124 17:46:29.008825 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
I0124 17:46:29.008838 128080 exec_runner.go:144] found /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem, removing ...
I0124 17:46:29.008844 128080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem
I0124 17:46:29.008863 128080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15565-3637/.minikube/ca.pem (1078 bytes)
I0124 17:46:29.008904 128080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem org=jenkins.multinode-585561-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-585561-m02]
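The server cert is minted locally and signed by the profile's CA, with the org and SANs listed above. A crypto/x509 sketch under stated assumptions: loading ca.pem/ca-key.pem into caCert/caKey is elided, and the serial number is fixed for brevity:

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "time"
)

// signServerCert sketches the "generating server cert" step: a fresh
// server key pair signed by the minikube CA, with the SANs from the log.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        return nil, nil, err
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1), // fixed for brevity
        Subject:      pkix.Name{Organization: []string{"jenkins.multinode-585561-m02"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"localhost", "minikube", "multinode-585561-m02"},
        IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    if err != nil {
        return nil, nil, err
    }
    return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}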
I0124 17:46:29.066247 128080 provision.go:172] copyRemoteCerts
I0124 17:46:29.066297 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0124 17:46:29.066330 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:29.090529 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
I0124 17:46:29.187623 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0124 17:46:29.187688 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0124 17:46:29.204477 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem -> /etc/docker/server.pem
I0124 17:46:29.204555 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0124 17:46:29.221516 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0124 17:46:29.221578 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0124 17:46:29.238801 128080 provision.go:86] duration metric: configureAuth took 254.318022ms
I0124 17:46:29.238830 128080 ubuntu.go:193] setting minikube options for container-runtime
I0124 17:46:29.239005 128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:46:29.239054 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:29.262199 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:46:29.262383 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0124 17:46:29.262402 128080 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0124 17:46:29.392680 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0124 17:46:29.392712 128080 ubuntu.go:71] root file system type: overlay
I0124 17:46:29.392935 128080 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0124 17:46:29.392998 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:29.416887 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:46:29.417037 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0124 17:46:29.417098 128080 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.58.2"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0124 17:46:29.557121 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.58.2
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0124 17:46:29.557187 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:29.582230 128080 main.go:141] libmachine: Using SSH client type: native
I0124 17:46:29.582371 128080 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1980] 0x7f4b00 <nil> [] 0s} 127.0.0.1 32857 <nil> <nil>}
I0124 17:46:29.582391 128080 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0124 17:46:30.222175 128080 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-12-15 22:25:58.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-01-24 17:46:29.552301730 +0000
@@ -1,30 +1,33 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+Environment=NO_PROXY=192.168.58.2
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +35,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0124 17:46:30.222205 128080 machine.go:91] provisioned docker machine in 1.56142627s
I0124 17:46:30.222217 128080 client.go:171] LocalClient.Create took 8.592619612s
I0124 17:46:30.222229 128080 start.go:167] duration metric: libmachine.API.Create for "multinode-585561" took 8.592676152s
I0124 17:46:30.222237 128080 start.go:300] post-start starting for "multinode-585561-m02" (driver="docker")
I0124 17:46:30.222244 128080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0124 17:46:30.222302 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0124 17:46:30.222343 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:30.248305 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
I0124 17:46:30.340451 128080 ssh_runner.go:195] Run: cat /etc/os-release
I0124 17:46:30.343294 128080 command_runner.go:130] > NAME="Ubuntu"
I0124 17:46:30.343330 128080 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
I0124 17:46:30.343335 128080 command_runner.go:130] > ID=ubuntu
I0124 17:46:30.343340 128080 command_runner.go:130] > ID_LIKE=debian
I0124 17:46:30.343345 128080 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
I0124 17:46:30.343349 128080 command_runner.go:130] > VERSION_ID="20.04"
I0124 17:46:30.343362 128080 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
I0124 17:46:30.343371 128080 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
I0124 17:46:30.343384 128080 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I0124 17:46:30.343399 128080 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I0124 17:46:30.343409 128080 command_runner.go:130] > VERSION_CODENAME=focal
I0124 17:46:30.343419 128080 command_runner.go:130] > UBUNTU_CODENAME=focal
I0124 17:46:30.343470 128080 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0124 17:46:30.343485 128080 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0124 17:46:30.343493 128080 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0124 17:46:30.343499 128080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0124 17:46:30.343510 128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/addons for local assets ...
I0124 17:46:30.343562 128080 filesync.go:126] Scanning /home/jenkins/minikube-integration/15565-3637/.minikube/files for local assets ...
I0124 17:46:30.343616 128080 filesync.go:149] local asset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> 101262.pem in /etc/ssl/certs
I0124 17:46:30.343625 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /etc/ssl/certs/101262.pem
I0124 17:46:30.343688 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0124 17:46:30.350606 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:46:30.368431 128080 start.go:303] post-start completed in 146.178177ms
I0124 17:46:30.368831 128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m02
I0124 17:46:30.392658 128080 profile.go:148] Saving config to /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/config.json ...
I0124 17:46:30.392914 128080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0124 17:46:30.392951 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:30.415793 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
I0124 17:46:30.504850 128080 command_runner.go:130] > 23%
I0124 17:46:30.504919 128080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0124 17:46:30.508408 128080 command_runner.go:130] > 225G
I0124 17:46:30.508596 128080 start.go:128] duration metric: createHost completed in 8.882674072s
I0124 17:46:30.508619 128080 start.go:83] releasing machines lock for "multinode-585561-m02", held for 8.882814181s
I0124 17:46:30.508680 128080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-585561-m02
I0124 17:46:30.534666 128080 out.go:177] * Found network options:
I0124 17:46:30.536354 128080 out.go:177] - NO_PROXY=192.168.58.2
W0124 17:46:30.537678 128080 proxy.go:119] fail to check proxy env: Error ip not in block
W0124 17:46:30.537736 128080 proxy.go:119] fail to check proxy env: Error ip not in block
I0124 17:46:30.537807 128080 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0124 17:46:30.537845 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:30.537921 128080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0124 17:46:30.537971 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561-m02
I0124 17:46:30.564602 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
I0124 17:46:30.565920 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561-m02/id_rsa Username:docker}
I0124 17:46:30.652966 128080 command_runner.go:130] > File: /etc/cni/net.d/200-loopback.conf
I0124 17:46:30.653005 128080 command_runner.go:130] > Size: 54 Blocks: 8 IO Block: 4096 regular file
I0124 17:46:30.653015 128080 command_runner.go:130] > Device: e3h/227d Inode: 538245 Links: 1
I0124 17:46:30.653022 128080 command_runner.go:130] > Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
I0124 17:46:30.653028 128080 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
I0124 17:46:30.653033 128080 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
I0124 17:46:30.653037 128080 command_runner.go:130] > Change: 2023-01-24 17:29:01.213660493 +0000
I0124 17:46:30.653041 128080 command_runner.go:130] > Birth: -
I0124 17:46:30.681368 128080 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0124 17:46:30.682818 128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0124 17:46:30.703226 128080 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
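(Editor's note: the find/sed one-liner above patches any loopback CNI config so it carries the "name" field required by CNI spec 1.0.0 and a matching cniVersion. A rough Go sketch of the same edit, assuming the single 200-loopback.conf file stat'd above and root access; not minikube's actual code path, which shells out over SSH:)

package main

import (
	"encoding/json"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf" // file stat'd above
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // CNI 1.0.0 requires a network name
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}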
I0124 17:46:30.703353 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0124 17:46:30.710062 128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0124 17:46:30.722812 128080 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0124 17:46:30.738206 128080 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf,
I0124 17:46:30.738246 128080 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0124 17:46:30.738259 128080 start.go:472] detecting cgroup driver to use...
I0124 17:46:30.738295 128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:46:30.738451 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:46:30.751102 128080 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0124 17:46:30.751129 128080 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
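(Editor's note: the crictl.yaml write above is simple enough to restate. A minimal sketch, assuming root and the containerd socket shown, of creating the same file in Go; minikube itself pipes printf through sudo tee over SSH:)

package main

import "os"

// Both endpoints point crictl at the containerd socket, mirroring the
// file the tee command above creates.
const crictlYAML = `runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
`

func main() {
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
		panic(err)
	}
}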
I0124 17:46:30.751194 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0124 17:46:30.758786 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0124 17:46:30.766465 128080 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0124 17:46:30.766521 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0124 17:46:30.774158 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:46:30.781590 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0124 17:46:30.789295 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0124 17:46:30.796961 128080 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0124 17:46:30.803813 128080 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
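(Editor's note: the sed invocations above all follow one pattern, an anchored regex replace over /etc/containerd/config.toml. A sketch of the SystemdCgroup flip in Go, assuming root access; the indentation capture group preserves TOML nesting just as the sed's \1 does:)

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same anchored replace as the sed above: turn off runc's systemd
	// cgroup integration so containerd drives cgroupfs directly.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}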
I0124 17:46:30.811556 128080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0124 17:46:30.817890 128080 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0124 17:46:30.817965 128080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0124 17:46:30.824328 128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:46:30.895621 128080 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0124 17:46:30.978862 128080 start.go:472] detecting cgroup driver to use...
I0124 17:46:30.978911 128080 detect.go:158] detected "cgroupfs" cgroup driver on host os
I0124 17:46:30.978956 128080 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0124 17:46:30.989300 128080 command_runner.go:130] > # /lib/systemd/system/docker.service
I0124 17:46:30.989325 128080 command_runner.go:130] > [Unit]
I0124 17:46:30.989335 128080 command_runner.go:130] > Description=Docker Application Container Engine
I0124 17:46:30.989343 128080 command_runner.go:130] > Documentation=https://docs.docker.com
I0124 17:46:30.989354 128080 command_runner.go:130] > BindsTo=containerd.service
I0124 17:46:30.989364 128080 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
I0124 17:46:30.989373 128080 command_runner.go:130] > Wants=network-online.target
I0124 17:46:30.989387 128080 command_runner.go:130] > Requires=docker.socket
I0124 17:46:30.989397 128080 command_runner.go:130] > StartLimitBurst=3
I0124 17:46:30.989407 128080 command_runner.go:130] > StartLimitIntervalSec=60
I0124 17:46:30.989413 128080 command_runner.go:130] > [Service]
I0124 17:46:30.989422 128080 command_runner.go:130] > Type=notify
I0124 17:46:30.989432 128080 command_runner.go:130] > Restart=on-failure
I0124 17:46:30.989443 128080 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
I0124 17:46:30.989458 128080 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0124 17:46:30.989475 128080 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0124 17:46:30.989488 128080 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0124 17:46:30.989503 128080 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0124 17:46:30.989517 128080 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0124 17:46:30.989531 128080 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0124 17:46:30.989545 128080 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0124 17:46:30.989562 128080 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0124 17:46:30.989576 128080 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0124 17:46:30.989586 128080 command_runner.go:130] > ExecStart=
I0124 17:46:30.989610 128080 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
I0124 17:46:30.989621 128080 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0124 17:46:30.989633 128080 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0124 17:46:30.989646 128080 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0124 17:46:30.989652 128080 command_runner.go:130] > LimitNOFILE=infinity
I0124 17:46:30.989661 128080 command_runner.go:130] > LimitNPROC=infinity
I0124 17:46:30.989670 128080 command_runner.go:130] > LimitCORE=infinity
I0124 17:46:30.989682 128080 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0124 17:46:30.989693 128080 command_runner.go:130] > # Only systemd 226 and above support this option.
I0124 17:46:30.989702 128080 command_runner.go:130] > TasksMax=infinity
I0124 17:46:30.989709 128080 command_runner.go:130] > TimeoutStartSec=0
I0124 17:46:30.989722 128080 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0124 17:46:30.989733 128080 command_runner.go:130] > Delegate=yes
I0124 17:46:30.989747 128080 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0124 17:46:30.989758 128080 command_runner.go:130] > KillMode=process
I0124 17:46:30.989768 128080 command_runner.go:130] > [Install]
I0124 17:46:30.989775 128080 command_runner.go:130] > WantedBy=multi-user.target
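(Editor's note: the unit's own comments explain the double-ExecStart pattern: a systemd drop-in inherits ExecStart= from the base unit and must blank it before overriding. A minimal sketch of writing such an override in Go; the path and dockerd flags are illustrative, not minikube's full command shown above:)

package main

import "os"

// The first, empty ExecStart= clears the command inherited from the base
// unit; the second sets the replacement. Without the clearing line,
// systemd rejects the unit for having two ExecStart= settings.
const dropIn = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
`

func main() {
	dir := "/etc/systemd/system/docker.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(dir+"/override.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
}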
I0124 17:46:30.989800 128080 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0124 17:46:30.989838 128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0124 17:46:30.998964 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0124 17:46:31.012139 128080 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0124 17:46:31.012171 128080 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
I0124 17:46:31.013058 128080 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0124 17:46:31.093404 128080 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0124 17:46:31.184202 128080 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0124 17:46:31.184231 128080 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0124 17:46:31.198354 128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:46:31.284809 128080 ssh_runner.go:195] Run: sudo systemctl restart docker
I0124 17:46:31.487366 128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:46:31.564876 128080 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
I0124 17:46:31.564948 128080 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0124 17:46:31.634912 128080 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0124 17:46:31.711104 128080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0124 17:46:31.783648 128080 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0124 17:46:31.794622 128080 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0124 17:46:31.794685 128080 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0124 17:46:31.797837 128080 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0124 17:46:31.797859 128080 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0124 17:46:31.797868 128080 command_runner.go:130] > Device: ech/236d Inode: 206 Links: 1
I0124 17:46:31.797879 128080 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 999/ docker)
I0124 17:46:31.797893 128080 command_runner.go:130] > Access: 2023-01-24 17:46:31.788460948 +0000
I0124 17:46:31.797903 128080 command_runner.go:130] > Modify: 2023-01-24 17:46:31.788460948 +0000
I0124 17:46:31.797912 128080 command_runner.go:130] > Change: 2023-01-24 17:46:31.792461232 +0000
I0124 17:46:31.797921 128080 command_runner.go:130] > Birth: -
I0124 17:46:31.797948 128080 start.go:540] Will wait 60s for crictl version
I0124 17:46:31.797987 128080 ssh_runner.go:195] Run: which crictl
I0124 17:46:31.800421 128080 command_runner.go:130] > /usr/bin/crictl
I0124 17:46:31.800636 128080 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0124 17:46:31.888765 128080 command_runner.go:130] > Version: 0.1.0
I0124 17:46:31.888788 128080 command_runner.go:130] > RuntimeName: docker
I0124 17:46:31.888794 128080 command_runner.go:130] > RuntimeVersion: 20.10.22
I0124 17:46:31.888800 128080 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0124 17:46:31.890395 128080 start.go:556] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.22
RuntimeApiVersion: v1alpha2
I0124 17:46:31.890445 128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:46:31.917315 128080 command_runner.go:130] > 20.10.22
I0124 17:46:31.917381 128080 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0124 17:46:31.945338 128080 command_runner.go:130] > 20.10.22
I0124 17:46:31.951194 128080 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
I0124 17:46:31.952812 128080 out.go:177] - env NO_PROXY=192.168.58.2
I0124 17:46:31.954263 128080 cli_runner.go:164] Run: docker network inspect multinode-585561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0124 17:46:31.977477 128080 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0124 17:46:31.980857 128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 17:46:31.989900 128080 certs.go:56] Setting up /home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561 for IP: 192.168.58.3
I0124 17:46:31.989931 128080 certs.go:186] acquiring lock for shared ca certs: {Name:mk1dc62d6b43bec706eb6ba5de0c4f61edad78b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0124 17:46:31.990057 128080 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key
I0124 17:46:31.990090 128080 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key
I0124 17:46:31.990103 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0124 17:46:31.990113 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0124 17:46:31.990124 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0124 17:46:31.990134 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0124 17:46:31.990181 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem (1338 bytes)
W0124 17:46:31.990210 128080 certs.go:397] ignoring /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126_empty.pem, impossibly tiny 0 bytes
I0124 17:46:31.990219 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca-key.pem (1675 bytes)
I0124 17:46:31.990240 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/ca.pem (1078 bytes)
I0124 17:46:31.990261 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/cert.pem (1123 bytes)
I0124 17:46:31.990280 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/home/jenkins/minikube-integration/15565-3637/.minikube/certs/key.pem (1679 bytes)
I0124 17:46:31.990320 128080 certs.go:401] found cert: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem (1708 bytes)
I0124 17:46:31.990344 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem -> /usr/share/ca-certificates/10126.pem
I0124 17:46:31.990355 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem -> /usr/share/ca-certificates/101262.pem
I0124 17:46:31.990365 128080 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0124 17:46:31.990751 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0124 17:46:32.007864 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0124 17:46:32.024362 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0124 17:46:32.041021 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0124 17:46:32.057521 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/certs/10126.pem --> /usr/share/ca-certificates/10126.pem (1338 bytes)
I0124 17:46:32.074154 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/files/etc/ssl/certs/101262.pem --> /usr/share/ca-certificates/101262.pem (1708 bytes)
I0124 17:46:32.091213 128080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0124 17:46:32.108020 128080 ssh_runner.go:195] Run: openssl version
I0124 17:46:32.112413 128080 command_runner.go:130] > OpenSSL 1.1.1f 31 Mar 2020
I0124 17:46:32.112532 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0124 17:46:32.119496 128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0124 17:46:32.122638 128080 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
I0124 17:46:32.122706 128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 24 17:29 /usr/share/ca-certificates/minikubeCA.pem
I0124 17:46:32.122770 128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0124 17:46:32.127189 128080 command_runner.go:130] > b5213941
I0124 17:46:32.127363 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0124 17:46:32.134494 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10126.pem && ln -fs /usr/share/ca-certificates/10126.pem /etc/ssl/certs/10126.pem"
I0124 17:46:32.141565 128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10126.pem
I0124 17:46:32.144437 128080 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
I0124 17:46:32.144456 128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 24 17:32 /usr/share/ca-certificates/10126.pem
I0124 17:46:32.144492 128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10126.pem
I0124 17:46:32.148977 128080 command_runner.go:130] > 51391683
I0124 17:46:32.149030 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10126.pem /etc/ssl/certs/51391683.0"
I0124 17:46:32.156508 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101262.pem && ln -fs /usr/share/ca-certificates/101262.pem /etc/ssl/certs/101262.pem"
I0124 17:46:32.165557 128080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101262.pem
I0124 17:46:32.168669 128080 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
I0124 17:46:32.168742 128080 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 24 17:32 /usr/share/ca-certificates/101262.pem
I0124 17:46:32.168795 128080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101262.pem
I0124 17:46:32.173480 128080 command_runner.go:130] > 3ec20f2e
I0124 17:46:32.173540 128080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101262.pem /etc/ssl/certs/3ec20f2e.0"
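(Editor's note: the openssl/ln pairs above implement the OpenSSL hashed-directory convention: each CA in /etc/ssl/certs gets a symlink named <subject-hash>.0. A sketch of one round trip in Go, assuming openssl on PATH and root; minikube runs the equivalent shell over SSH:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashAndLink mirrors the commands above: compute the cert's OpenSSL
// subject hash (b5213941 for minikubeCA.pem in this log) and symlink
// /etc/ssl/certs/<hash>.0 so TLS libraries can find the CA.
func hashAndLink(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	return exec.Command("sh", "-c",
		fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)).Run()
}

func main() {
	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}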
I0124 17:46:32.180694 128080 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0124 17:46:32.245327 128080 command_runner.go:130] > cgroupfs
I0124 17:46:32.246641 128080 cni.go:84] Creating CNI manager for ""
I0124 17:46:32.246655 128080 cni.go:136] 2 nodes found, recommending kindnet
I0124 17:46:32.246665 128080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0124 17:46:32.246680 128080 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-585561 NodeName:multinode-585561-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0124 17:46:32.246792 128080 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.58.3
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "multinode-585561-m02"
  kubeletExtraArgs:
    node-ip: 192.168.58.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
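(Editor's note: the kubeadm config above is one stream holding four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---. A sketch that walks such a stream with gopkg.in/yaml.v3, an assumed dependency; the file path is illustrative:)

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break // end of the multi-document stream
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}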
I0124 17:46:32.246888 128080 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-585561-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
[Install]
config:
{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0124 17:46:32.246936 128080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
I0124 17:46:32.253621 128080 command_runner.go:130] > kubeadm
I0124 17:46:32.253639 128080 command_runner.go:130] > kubectl
I0124 17:46:32.253669 128080 command_runner.go:130] > kubelet
I0124 17:46:32.254172 128080 binaries.go:44] Found k8s binaries, skipping transfer
I0124 17:46:32.254230 128080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0124 17:46:32.260899 128080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
I0124 17:46:32.273147 128080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0124 17:46:32.285590 128080 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0124 17:46:32.288442 128080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0124 17:46:32.297638 128080 host.go:66] Checking if "multinode-585561" exists ...
I0124 17:46:32.297886 128080 config.go:180] Loaded profile config "multinode-585561": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0124 17:46:32.297915 128080 start.go:288] JoinCluster: &{Name:multinode-585561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-585561 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0124 17:46:32.298021 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0124 17:46:32.298060 128080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-585561
I0124 17:46:32.321952 128080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15565-3637/.minikube/machines/multinode-585561/id_rsa Username:docker}
I0124 17:46:32.745020 128080 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zt8ug3.1xbm3om5dnjax0pw --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46
I0124 17:46:32.745095 128080 start.go:309] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0124 17:46:32.745129 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt8ug3.1xbm3om5dnjax0pw --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m02"
I0124 17:46:32.780280 128080 command_runner.go:130] > [preflight] Running pre-flight checks
I0124 17:46:32.807385 128080 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
I0124 17:46:32.807411 128080 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1027-gcp
I0124 17:46:32.807420 128080 command_runner.go:130] > OS: Linux
I0124 17:46:32.807427 128080 command_runner.go:130] > CGROUPS_CPU: enabled
I0124 17:46:32.807433 128080 command_runner.go:130] > CGROUPS_CPUACCT: enabled
I0124 17:46:32.807438 128080 command_runner.go:130] > CGROUPS_CPUSET: enabled
I0124 17:46:32.807443 128080 command_runner.go:130] > CGROUPS_DEVICES: enabled
I0124 17:46:32.807448 128080 command_runner.go:130] > CGROUPS_FREEZER: enabled
I0124 17:46:32.807452 128080 command_runner.go:130] > CGROUPS_MEMORY: enabled
I0124 17:46:32.807462 128080 command_runner.go:130] > CGROUPS_PIDS: enabled
I0124 17:46:32.807472 128080 command_runner.go:130] > CGROUPS_HUGETLB: enabled
I0124 17:46:32.807476 128080 command_runner.go:130] > CGROUPS_BLKIO: enabled
I0124 17:46:32.885368 128080 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0124 17:46:32.885392 128080 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0124 17:46:32.916258 128080 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0124 17:46:32.916285 128080 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0124 17:46:32.916291 128080 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0124 17:46:32.996754 128080 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0124 17:46:34.515170 128080 command_runner.go:130] > This node has joined the cluster:
I0124 17:46:34.515247 128080 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0124 17:46:34.515261 128080 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0124 17:46:34.515272 128080 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0124 17:46:34.517967 128080 command_runner.go:130] ! W0124 17:46:32.779878 1352 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0124 17:46:34.518001 128080 command_runner.go:130] ! [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1027-gcp\n", err: exit status 1
I0124 17:46:34.518016 128080 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0124 17:46:34.518043 128080 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zt8ug3.1xbm3om5dnjax0pw --discovery-token-ca-cert-hash sha256:c095b0be75e56fca304f9ba33fcd9e9da2689ec75ccd518e7b8d3c504090ed46 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-585561-m02": (1.772900386s)
I0124 17:46:34.518066 128080 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0124 17:46:34.673090 128080 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
I0124 17:46:34.673122 128080 start.go:290] JoinCluster complete in 2.375206941s
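(Editor's note: the --discovery-token-ca-cert-hash value in the join command above is derived, per kubeadm's documented scheme, as the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A sketch that recomputes it in Go; the cert path is the one installed on the node earlier in this log, and read access is assumed:)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA,
	// yielding the "sha256:..." token the join command carries.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}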
I0124 17:46:34.673134 128080 cni.go:84] Creating CNI manager for ""
I0124 17:46:34.673141 128080 cni.go:136] 2 nodes found, recommending kindnet
I0124 17:46:34.673199 128080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0124 17:46:34.676417 128080 command_runner.go:130] > File: /opt/cni/bin/portmap
I0124 17:46:34.676441 128080 command_runner.go:130] > Size: 2828728 Blocks: 5536 IO Block: 4096 regular file
I0124 17:46:34.676452 128080 command_runner.go:130] > Device: 34h/52d Inode: 535835 Links: 1
I0124 17:46:34.676461 128080 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0124 17:46:34.676475 128080 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
I0124 17:46:34.676487 128080 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
I0124 17:46:34.676507 128080 command_runner.go:130] > Change: 2023-01-24 17:29:00.473607792 +0000
I0124 17:46:34.676514 128080 command_runner.go:130] > Birth: -
I0124 17:46:34.676563 128080 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
I0124 17:46:34.676578 128080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I0124 17:46:34.689706 128080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0124 17:46:34.866008 128080 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0124 17:46:34.869059 128080 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0124 17:46:34.871106 128080 command_runner.go:130] > serviceaccount/kindnet unchanged
I0124 17:46:34.880587 128080 command_runner.go:130] > daemonset.apps/kindnet configured
I0124 17:46:34.884679 128080 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3637/kubeconfig
I0124 17:46:34.884894 128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0124 17:46:34.885165 128080 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0124 17:46:34.885175 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.885183 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.885189 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.886855 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.886871 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.886878 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.886883 128080 round_trippers.go:580] Audit-Id: 497f5b19-393d-4b16-893c-1a13ae3475f1
I0124 17:46:34.886888 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.886893 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.886899 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.886907 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.886917 128080 round_trippers.go:580] Content-Length: 291
I0124 17:46:34.886949 128080 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"af865015-1135-4b27-bdb3-fded1d2259a8","resourceVersion":"425","creationTimestamp":"2023-01-24T17:45:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0124 17:46:34.887042 128080 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-585561" context rescaled to 1 replicas
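(Editor's note: the rescale above goes through the Deployment's scale subresource, the same endpoint the GET shows, rather than patching the Deployment itself. A hedged client-go sketch of the matching read; client-go availability and the default kubeconfig path are assumptions:)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Reads the deployments/coredns scale subresource, as in the log.
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("coredns replicas: %d\n", scale.Spec.Replicas)
}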
I0124 17:46:34.887072 128080 start.go:221] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0124 17:46:34.891190 128080 out.go:177] * Verifying Kubernetes components...
I0124 17:46:34.892753 128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0124 17:46:34.902365 128080 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/15565-3637/kubeconfig
I0124 17:46:34.902604 128080 kapi.go:59] client config for multinode-585561: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.crt", KeyFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/profiles/multinode-585561/client.key", CAFile:"/home/jenkins/minikube-integration/15565-3637/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1889220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0124 17:46:34.902881 128080 node_ready.go:35] waiting up to 6m0s for node "multinode-585561-m02" to be "Ready" ...
I0124 17:46:34.902939 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
I0124 17:46:34.902950 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.902961 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.902971 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.905107 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:34.905129 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.905141 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.905150 128080 round_trippers.go:580] Audit-Id: d8139444-727b-4f5f-bbc8-eb054eef8fe7
I0124 17:46:34.905163 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.905177 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.905186 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.905195 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.905314 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
I0124 17:46:34.905658 128080 node_ready.go:49] node "multinode-585561-m02" has status "Ready":"True"
I0124 17:46:34.905673 128080 node_ready.go:38] duration metric: took 2.777735ms waiting for node "multinode-585561-m02" to be "Ready" ...
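(Editor's note: the 2.7ms wait resolved immediately because the node object already carried a NodeReady condition with status True, visible in the response body above. A minimal sketch of that check, assuming the k8s.io/api types; not node_ready.go's literal code:)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady is the core of the check: a node counts as Ready when its
// NodeReady condition reports status True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(nodeReady(n)) // true
}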
I0124 17:46:34.905682 128080 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0124 17:46:34.905749 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
I0124 17:46:34.905760 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.905771 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.905780 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.908622 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:34.908642 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.908652 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.908661 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.908669 128080 round_trippers.go:580] Audit-Id: 79701e73-3f1c-4466-99c3-cd0d2cee8949
I0124 17:46:34.908682 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.908694 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.908704 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.910359 128080 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"468"},"items":[{"metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65261 chars]
I0124 17:46:34.913016 128080 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.913075 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-lfdwf
I0124 17:46:34.913084 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.913091 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.913097 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.914706 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.914721 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.914731 128080 round_trippers.go:580] Audit-Id: e7b2d476-b7a8-4125-b812-f0dc6ea3efa1
I0124 17:46:34.914757 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.914770 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.914779 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.914790 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.914801 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.914924 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-lfdwf","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"3ad6d110-548d-4cec-bae8-945a1e7d7853","resourceVersion":"421","creationTimestamp":"2023-01-24T17:46:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"af494a86-3985-4d3d-b5b8-a2cb2749d659","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af494a86-3985-4d3d-b5b8-a2cb2749d659\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5901 chars]
I0124 17:46:34.915322 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:34.915334 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.915342 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.915348 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.917048 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.917063 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.917069 128080 round_trippers.go:580] Audit-Id: 7b8544d5-7c5b-41d4-a28e-160f995249fc
I0124 17:46:34.917075 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.917080 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.917085 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.917092 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.917100 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.917221 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
I0124 17:46:34.917486 128080 pod_ready.go:92] pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:34.917495 128080 pod_ready.go:81] duration metric: took 4.459856ms waiting for pod "coredns-787d4945fb-lfdwf" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.917504 128080 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.917543 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-585561
I0124 17:46:34.917550 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.917557 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.917565 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.919030 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.919043 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.919050 128080 round_trippers.go:580] Audit-Id: 3b87f326-ba3e-47f2-8be8-98a00127b8a3
I0124 17:46:34.919056 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.919064 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.919075 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.919085 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.919100 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.919178 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-585561","namespace":"kube-system","uid":"e90a4912-09cf-4017-b275-36e5cbaf8fb7","resourceVersion":"307","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.mirror":"8dd19130fd729f0d2e0f77de0c35a9c6","kubernetes.io/config.seen":"2023-01-24T17:45:58.071793905Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5806 chars]
I0124 17:46:34.919507 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:34.919519 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.919526 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.919532 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.921109 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.921129 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.921139 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.921148 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.921157 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.921166 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.921178 128080 round_trippers.go:580] Audit-Id: 802452fc-eef0-42bd-b7f5-9b7b735dd955
I0124 17:46:34.921188 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.921312 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
I0124 17:46:34.921668 128080 pod_ready.go:92] pod "etcd-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:34.921679 128080 pod_ready.go:81] duration metric: took 4.167237ms waiting for pod "etcd-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.921691 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.921728 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-585561
I0124 17:46:34.921739 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.921746 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.921755 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.923272 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.923289 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.923298 128080 round_trippers.go:580] Audit-Id: 384d6620-f3dc-41f6-9df5-dbbaca36ccaa
I0124 17:46:34.923307 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.923319 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.923331 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.923343 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.923354 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.923466 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-585561","namespace":"kube-system","uid":"b6111d69-e414-4456-b981-c45749f2bc69","resourceVersion":"270","creationTimestamp":"2023-01-24T17:45:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.mirror":"e6933d1a0858d027c0aa46d814d0f153","kubernetes.io/config.seen":"2023-01-24T17:45:58.071829413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
I0124 17:46:34.923874 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:34.923887 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.923894 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.923900 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.925292 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.925310 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.925322 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.925330 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.925341 128080 round_trippers.go:580] Audit-Id: c5e7c1a0-f4ec-442e-952d-a055413f47d7
I0124 17:46:34.925354 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.925361 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.925369 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.925433 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
I0124 17:46:34.925680 128080 pod_ready.go:92] pod "kube-apiserver-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:34.925689 128080 pod_ready.go:81] duration metric: took 3.992929ms waiting for pod "kube-apiserver-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.925697 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.925730 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-585561
I0124 17:46:34.925742 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.925748 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.925757 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.927091 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.927111 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.927120 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.927130 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.927140 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.927150 128080 round_trippers.go:580] Audit-Id: 1da2c34e-098b-4751-9996-757f61d88dde
I0124 17:46:34.927163 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.927174 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.927260 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-585561","namespace":"kube-system","uid":"64983300-9251-4324-9c8a-e0ff30ae4238","resourceVersion":"385","creationTimestamp":"2023-01-24T17:45:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.mirror":"667326c0e187035b2101d4ba5b407378","kubernetes.io/config.seen":"2023-01-24T17:45:47.607485043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
I0124 17:46:34.927616 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:34.927628 128080 round_trippers.go:469] Request Headers:
I0124 17:46:34.927635 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:34.927641 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:34.929053 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:34.929071 128080 round_trippers.go:577] Response Headers:
I0124 17:46:34.929081 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:34 GMT
I0124 17:46:34.929090 128080 round_trippers.go:580] Audit-Id: 33fd5bb9-63ee-49e7-be22-f13c8137dae6
I0124 17:46:34.929098 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:34.929106 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:34.929116 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:34.929128 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:34.929242 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
I0124 17:46:34.929489 128080 pod_ready.go:92] pod "kube-controller-manager-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:34.929498 128080 pod_ready.go:81] duration metric: took 3.796338ms waiting for pod "kube-controller-manager-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:34.929505 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-txqvw" in "kube-system" namespace to be "Ready" ...
I0124 17:46:35.103905 128080 request.go:622] Waited for 174.333925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
I0124 17:46:35.103967 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
I0124 17:46:35.103974 128080 round_trippers.go:469] Request Headers:
I0124 17:46:35.103985 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:35.103996 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:35.106149 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:35.106171 128080 round_trippers.go:577] Response Headers:
I0124 17:46:35.106180 128080 round_trippers.go:580] Audit-Id: 99042336-cd88-47a2-b43b-213db072d145
I0124 17:46:35.106188 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:35.106195 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:35.106203 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:35.106213 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:35.106226 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:35 GMT
I0124 17:46:35.106328 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"458","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
I0124 17:46:35.302962 128080 request.go:622] Waited for 196.275858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
I0124 17:46:35.303023 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
I0124 17:46:35.303027 128080 round_trippers.go:469] Request Headers:
I0124 17:46:35.303048 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:35.303056 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:35.305148 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:35.305173 128080 round_trippers.go:577] Response Headers:
I0124 17:46:35.305181 128080 round_trippers.go:580] Audit-Id: fb8464b5-b29c-4d87-ab16-798859c01b1c
I0124 17:46:35.305187 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:35.305195 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:35.305204 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:35.305216 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:35.305228 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:35 GMT
I0124 17:46:35.305345 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
I0124 17:46:35.806505 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
I0124 17:46:35.806526 128080 round_trippers.go:469] Request Headers:
I0124 17:46:35.806534 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:35.806540 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:35.808640 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:35.808661 128080 round_trippers.go:577] Response Headers:
I0124 17:46:35.808668 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:35.808674 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:35.808679 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:35.808684 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:35 GMT
I0124 17:46:35.808689 128080 round_trippers.go:580] Audit-Id: e696418c-adcb-4b81-88da-d3011d643212
I0124 17:46:35.808696 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:35.808800 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"458","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
I0124 17:46:35.809158 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
I0124 17:46:35.809172 128080 round_trippers.go:469] Request Headers:
I0124 17:46:35.809181 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:35.809187 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:35.810803 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:35.810826 128080 round_trippers.go:577] Response Headers:
I0124 17:46:35.810835 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:35.810844 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:35.810853 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:35.810862 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:35 GMT
I0124 17:46:35.810870 128080 round_trippers.go:580] Audit-Id: 49f42aa1-5ed5-48d8-a566-6c63c175fb19
I0124 17:46:35.810883 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:35.810948 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
I0124 17:46:36.306643 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
I0124 17:46:36.306672 128080 round_trippers.go:469] Request Headers:
I0124 17:46:36.306684 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:36.306711 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:36.308954 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:36.308982 128080 round_trippers.go:577] Response Headers:
I0124 17:46:36.308992 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:36.309000 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:36.309008 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:36.309017 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:36.309026 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:36 GMT
I0124 17:46:36.309041 128080 round_trippers.go:580] Audit-Id: 27b22d5b-9a69-4d5d-a35e-d2783f53de23
I0124 17:46:36.309148 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"471","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
I0124 17:46:36.309567 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
I0124 17:46:36.309576 128080 round_trippers.go:469] Request Headers:
I0124 17:46:36.309583 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:36.309590 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:36.311658 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:36.311681 128080 round_trippers.go:577] Response Headers:
I0124 17:46:36.311690 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:36 GMT
I0124 17:46:36.311699 128080 round_trippers.go:580] Audit-Id: 10d56837-ad6e-4dc7-9076-f2437d09e638
I0124 17:46:36.311706 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:36.311724 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:36.311732 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:36.311745 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:36.311842 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
I0124 17:46:36.805879 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-txqvw
I0124 17:46:36.805900 128080 round_trippers.go:469] Request Headers:
I0124 17:46:36.805908 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:36.805914 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:36.807989 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:36.808013 128080 round_trippers.go:577] Response Headers:
I0124 17:46:36.808023 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:36.808032 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:36 GMT
I0124 17:46:36.808041 128080 round_trippers.go:580] Audit-Id: 587e119f-2e6c-48e3-8322-0e9f1e8eb42a
I0124 17:46:36.808049 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:36.808054 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:36.808062 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:36.808187 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-txqvw","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9184a5e-fb76-46e1-b029-9c0bb6a55a8f","resourceVersion":"480","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0124 17:46:36.808695 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561-m02
I0124 17:46:36.808708 128080 round_trippers.go:469] Request Headers:
I0124 17:46:36.808715 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:36.808721 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:36.810522 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:36.810537 128080 round_trippers.go:577] Response Headers:
I0124 17:46:36.810543 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:36.810549 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:36.810554 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:36.810559 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:36.810567 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:36 GMT
I0124 17:46:36.810575 128080 round_trippers.go:580] Audit-Id: f8e0330a-1bb9-40b1-a933-c5f0a1e1bb6b
I0124 17:46:36.810647 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561-m02","uid":"76657508-fda5-4f8e-bafd-a20797fda9b4","resourceVersion":"468","creationTimestamp":"2023-01-24T17:46:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:33Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4070 chars]
I0124 17:46:36.810929 128080 pod_ready.go:92] pod "kube-proxy-txqvw" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:36.810951 128080 pod_ready.go:81] duration metric: took 1.881438802s waiting for pod "kube-proxy-txqvw" in "kube-system" namespace to be "Ready" ...
I0124 17:46:36.810962 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
I0124 17:46:36.811016 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxrvx
I0124 17:46:36.811030 128080 round_trippers.go:469] Request Headers:
I0124 17:46:36.811037 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:36.811043 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:36.812833 128080 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0124 17:46:36.812853 128080 round_trippers.go:577] Response Headers:
I0124 17:46:36.812863 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:36 GMT
I0124 17:46:36.812872 128080 round_trippers.go:580] Audit-Id: b13a419b-1fb9-47ba-b3f4-b04fb85c822a
I0124 17:46:36.812882 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:36.812901 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:36.812916 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:36.812930 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:36.813030 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wxrvx","generateName":"kube-proxy-","namespace":"kube-system","uid":"435cbf4e-148f-46a7-894c-73bea3a2bb9c","resourceVersion":"386","creationTimestamp":"2023-01-24T17:46:10Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"915ecedf-5a94-48f1-af3d-5180b7c6a87a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:46:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915ecedf-5a94-48f1-af3d-5180b7c6a87a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
I0124 17:46:36.903706 128080 request.go:622] Waited for 90.265119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:36.903760 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:36.903766 128080 round_trippers.go:469] Request Headers:
I0124 17:46:36.903774 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:36.903781 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:36.906031 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:36.906056 128080 round_trippers.go:577] Response Headers:
I0124 17:46:36.906067 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:36.906076 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:36.906085 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:36.906095 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:36 GMT
I0124 17:46:36.906105 128080 round_trippers.go:580] Audit-Id: 945dcc0c-30de-44b3-9dfb-8429cc53242c
I0124 17:46:36.906114 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:36.906300 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
I0124 17:46:36.906655 128080 pod_ready.go:92] pod "kube-proxy-wxrvx" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:36.906673 128080 pod_ready.go:81] duration metric: took 95.698205ms waiting for pod "kube-proxy-wxrvx" in "kube-system" namespace to be "Ready" ...
I0124 17:46:36.906683 128080 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:37.103009 128080 request.go:622] Waited for 196.27182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
I0124 17:46:37.103069 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-585561
I0124 17:46:37.103074 128080 round_trippers.go:469] Request Headers:
I0124 17:46:37.103081 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:37.103088 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:37.105313 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:37.105350 128080 round_trippers.go:577] Response Headers:
I0124 17:46:37.105362 128080 round_trippers.go:580] Audit-Id: a41ca2bc-d889-4352-b78b-1657882d4df7
I0124 17:46:37.105370 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:37.105383 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:37.105395 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:37.105406 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:37.105417 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:37 GMT
I0124 17:46:37.105524 128080 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-585561","namespace":"kube-system","uid":"99936e13-49bf-4ab3-82ea-812373f654b6","resourceVersion":"291","creationTimestamp":"2023-01-24T17:45:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.mirror":"9db8e3e7879313b6e801011c12e1db82","kubernetes.io/config.seen":"2023-01-24T17:45:47.607460620Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-24T17:45:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
I0124 17:46:37.303177 128080 request.go:622] Waited for 197.268104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:37.303239 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-585561
I0124 17:46:37.303243 128080 round_trippers.go:469] Request Headers:
I0124 17:46:37.303250 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:37.303256 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:37.305709 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:37.305732 128080 round_trippers.go:577] Response Headers:
I0124 17:46:37.305742 128080 round_trippers.go:580] Audit-Id: 742d673f-c898-44bc-8806-324a8af3c921
I0124 17:46:37.305749 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:37.305754 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:37.305760 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:37.305768 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:37.305780 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:37 GMT
I0124 17:46:37.305907 128080 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-24T17:45:55Z","fieldsType":"FieldsV1","fi [truncated 5370 chars]
I0124 17:46:37.306205 128080 pod_ready.go:92] pod "kube-scheduler-multinode-585561" in "kube-system" namespace has status "Ready":"True"
I0124 17:46:37.306214 128080 pod_ready.go:81] duration metric: took 399.525668ms waiting for pod "kube-scheduler-multinode-585561" in "kube-system" namespace to be "Ready" ...
I0124 17:46:37.306224 128080 pod_ready.go:38] duration metric: took 2.40052908s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
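
Each readiness wait above is bounded at 6m0s, and the kube-proxy-txqvw checks land roughly 500ms apart (35.806s, 36.306s, 36.805s), which suggests a fixed-interval poll. A hedged sketch of that general pattern with apimachinery's wait helpers; the exact interval and the condition function are assumptions, not minikube's values:

package readiness

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitReady polls check every 500ms for up to 6 minutes, matching
// the cadence and bound visible in the pod_ready log lines above.
func waitReady(check func() (bool, error)) error {
	if err := wait.PollImmediate(500*time.Millisecond, 6*time.Minute, check); err != nil {
		return fmt.Errorf("condition never became true: %w", err)
	}
	return nil
}
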
I0124 17:46:37.306241 128080 system_svc.go:44] waiting for kubelet service to be running ....
I0124 17:46:37.306280 128080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0124 17:46:37.316166 128080 system_svc.go:56] duration metric: took 9.916335ms WaitForService to wait for kubelet.
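
The kubelet check above shells out to systemctl and relies purely on its exit status: `is-active --quiet` prints nothing and exits non-zero when the unit is not active. A local-command analogue as a sketch only (minikube actually runs this through its ssh_runner on the node):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors `systemctl is-active --quiet kubelet`:
// a nil error means exit status 0, i.e. the unit is active.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
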
I0124 17:46:37.316189 128080 kubeadm.go:578] duration metric: took 2.429088674s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0124 17:46:37.316207 128080 node_conditions.go:102] verifying NodePressure condition ...
I0124 17:46:37.503635 128080 request.go:622] Waited for 187.342903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
I0124 17:46:37.503694 128080 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
I0124 17:46:37.503698 128080 round_trippers.go:469] Request Headers:
I0124 17:46:37.503706 128080 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0124 17:46:37.503712 128080 round_trippers.go:473] Accept: application/json, */*
I0124 17:46:37.506053 128080 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0124 17:46:37.506077 128080 round_trippers.go:577] Response Headers:
I0124 17:46:37.506087 128080 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 7558294c-0633-4408-99cd-53ba35984452
I0124 17:46:37.506095 128080 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 07e199ca-8eb1-4cc9-86c0-4ce512bb9301
I0124 17:46:37.506103 128080 round_trippers.go:580] Date: Tue, 24 Jan 2023 17:46:37 GMT
I0124 17:46:37.506111 128080 round_trippers.go:580] Audit-Id: 047adda2-e523-4f86-b13a-9557d89d91bb
I0124 17:46:37.506123 128080 round_trippers.go:580] Cache-Control: no-cache, private
I0124 17:46:37.506132 128080 round_trippers.go:580] Content-Type: application/json
I0124 17:46:37.506263 128080 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"482"},"items":[{"metadata":{"name":"multinode-585561","uid":"aa22c606-8184-4ae0-a736-9a8685beb87e","resourceVersion":"432","creationTimestamp":"2023-01-24T17:45:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-585561","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6b2c057f52b907b52814c670e5ac26b018123ade","minikube.k8s.io/name":"multinode-585561","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_24T17_45_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10485 chars]
I0124 17:46:37.506712 128080 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0124 17:46:37.506726 128080 node_conditions.go:123] node cpu capacity is 8
I0124 17:46:37.506737 128080 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I0124 17:46:37.506740 128080 node_conditions.go:123] node cpu capacity is 8
I0124 17:46:37.506744 128080 node_conditions.go:105] duration metric: took 190.533637ms to run NodePressure ...
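
The NodePressure pass lists all nodes once and records each node's ephemeral-storage and CPU capacity (both nodes here report 304681132Ki and 8 CPUs). A rough client-go sketch of reading those quantities; the clientset wiring and package name are assumptions:

package nodecap

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacities lists every node and prints the two capacity
// fields that the node_conditions.go lines above report.
func printCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}
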
I0124 17:46:37.506753 128080 start.go:226] waiting for startup goroutines ...
I0124 17:46:37.507007 128080 ssh_runner.go:195] Run: rm -f paused
I0124 17:46:37.556258 128080 start.go:538] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
I0124 17:46:37.559978 128080 out.go:177] * Done! kubectl is now configured to use "multinode-585561" cluster and "default" namespace by default
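
The closing start.go line compares the local kubectl version against the cluster version and reports the minor-version skew (0 here, since both are 1.26.1). A toy sketch of that comparison with hand-rolled parsing, not minikube's actual helper:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor
// components of two "major.minor.patch" version strings.
func minorSkew(a, b string) (int, error) {
	pa, pb := strings.Split(a, "."), strings.Split(b, ".")
	if len(pa) < 2 || len(pb) < 2 {
		return 0, fmt.Errorf("malformed versions: %q, %q", a, b)
	}
	ma, err := strconv.Atoi(pa[1])
	if err != nil {
		return 0, err
	}
	mb, err := strconv.Atoi(pb[1])
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.26.1", "1.26.1")
	fmt.Println("minor skew:", skew) // prints 0
}
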
*
* ==> Docker <==
* -- Logs begin at Tue 2023-01-24 17:45:29 UTC, end at Tue 2023-01-24 17:49:46 UTC. --
Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.633138741Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.633148676Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634323356Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634352100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634365187Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 24 17:45:36 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:36.634374211Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355381779Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355410690Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355415966Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.355574482Z" level=info msg="Loading containers: start."
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.435750441Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.470587188Z" level=info msg="Loading containers: done."
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.480577419Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.480635872Z" level=info msg="Daemon has completed initialization"
Jan 24 17:45:39 multinode-585561 systemd[1]: Started Docker Application Container Engine.
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.498943247Z" level=info msg="API listen on [::]:2376"
Jan 24 17:45:39 multinode-585561 dockerd[1283]: time="2023-01-24T17:45:39.502639549Z" level=info msg="API listen on /var/run/docker.sock"
Jan 24 17:46:12 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:12.460746688Z" level=info msg="ignoring event" container=37e8478501e933582413e297c1e673f6918d6429a97123fb0b07ce4732e4c936 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 24 17:46:12 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:12.577129124Z" level=info msg="ignoring event" container=e50d011e65c7e1f08685495eb187d579f1ae10e39b6888c309fbb45306a4c6cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 24 17:46:13 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:13.011027304Z" level=info msg="ignoring event" container=19a971b91106d9708930bbfeba83bc98aab4cc9036d303d4f9f85d0e9193d087 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 24 17:46:14 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:14.352201402Z" level=info msg="ignoring event" container=5a2c5d625a14975e0548f8542d064947537e1d3d93b966e96273b61c7d512044 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 24 17:46:15 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:15.057917460Z" level=info msg="ignoring event" container=2d694add5bf5369c664ffa57535f4fd192341b356eb4489ace3841139b339b6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 24 17:46:16 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:16.200600114Z" level=info msg="ignoring event" container=937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 24 17:46:17 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:17.234437482Z" level=info msg="ignoring event" container=e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 24 17:46:18 multinode-585561 dockerd[1283]: time="2023-01-24T17:46:18.254551352Z" level=info msg="ignoring event" container=2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
fb3da35f45bf5 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 3 minutes ago Running busybox 0 124081a927014
3e1eaf7c0054a 5185b96f0becf 3 minutes ago Running coredns 0 47a63c2278997
28cc12c3f1288 kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe 3 minutes ago Running kindnet-cni 0 9aef0e100458e
1f47880f5c352 6e38f40d628db 3 minutes ago Running storage-provisioner 0 99236eb3b001f
7e5eddf7c5d55 46a6bb3c77ce0 3 minutes ago Running kube-proxy 0 747975b188cac
8d7a8a4801df0 e9c08e11b07f6 3 minutes ago Running kube-controller-manager 0 ff9340f5d9bcd
a8a00c2b5f80f fce326961ae2d 3 minutes ago Running etcd 0 d7ec06dc1a21d
8db5094d208be deb04688c4a35 3 minutes ago Running kube-apiserver 0 4fdcbd8bc5041
8af55922f6ee3 655493523f607 3 minutes ago Running kube-scheduler 0 8841f3ddae517
*
* ==> coredns [3e1eaf7c0054] <==
* [INFO] 10.244.0.3:46987 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115946s
[INFO] 10.244.1.2:33571 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152422s
[INFO] 10.244.1.2:48583 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001909051s
[INFO] 10.244.1.2:38742 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115129s
[INFO] 10.244.1.2:59507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086546s
[INFO] 10.244.1.2:32834 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001506643s
[INFO] 10.244.1.2:42786 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069047s
[INFO] 10.244.1.2:46959 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090386s
[INFO] 10.244.1.2:54809 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074209s
[INFO] 10.244.0.3:52431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127147s
[INFO] 10.244.0.3:59453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123092s
[INFO] 10.244.0.3:54130 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104928s
[INFO] 10.244.0.3:50539 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097535s
[INFO] 10.244.1.2:58908 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133496s
[INFO] 10.244.1.2:55440 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114378s
[INFO] 10.244.1.2:57653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117943s
[INFO] 10.244.1.2:50356 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122998s
[INFO] 10.244.0.3:44564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136332s
[INFO] 10.244.0.3:48809 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00012631s
[INFO] 10.244.0.3:34982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215444s
[INFO] 10.244.0.3:48866 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00012193s
[INFO] 10.244.1.2:45388 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159094s
[INFO] 10.244.1.2:32868 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130065s
[INFO] 10.244.1.2:52927 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093822s
[INFO] 10.244.1.2:44825 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068795s
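
The coredns queries above show the pod resolver's search-path expansion at work: the bare name kubernetes.default is NXDOMAIN both as-is and with the wrong default.svc.cluster.local suffix, and only kubernetes.default.svc.cluster.local answers NOERROR. From inside a pod, a plain Go lookup triggers the same expansion via /etc/resolv.conf; a sketch, assuming standard in-cluster DNS:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a pod, resolv.conf search suffixes expand the short name
	// to kubernetes.default.svc.cluster.local, the form that returns
	// NOERROR in the coredns log above.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default ->", addrs)
}
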
*
* ==> describe nodes <==
* Name: multinode-585561
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-585561
kubernetes.io/os=linux
minikube.k8s.io/commit=6b2c057f52b907b52814c670e5ac26b018123ade
minikube.k8s.io/name=multinode-585561
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_24T17_45_58_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 24 Jan 2023 17:45:55 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-585561
AcquireTime: <unset>
RenewTime: Tue, 24 Jan 2023 17:49:43 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 24 Jan 2023 17:46:59 +0000 Tue, 24 Jan 2023 17:45:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 24 Jan 2023 17:46:59 +0000 Tue, 24 Jan 2023 17:45:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 24 Jan 2023 17:46:59 +0000 Tue, 24 Jan 2023 17:45:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 24 Jan 2023 17:46:59 +0000 Tue, 24 Jan 2023 17:46:08 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.58.2
Hostname: multinode-585561
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: 11af74b3a18d4d7295d17813eccf6dd7
System UUID: 603f4c0b-41f8-4d3d-9b3f-d4e2b09a393b
Boot ID: 202c095e-d1d4-4b92-9c9d-a08c9f26c94d
Kernel Version: 5.15.0-1027-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.22
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-7rp7j 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m8s
kube-system coredns-787d4945fb-lfdwf 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 3m35s
kube-system etcd-multinode-585561 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 3m48s
kube-system kindnet-4zggw 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 3m36s
kube-system kube-apiserver-multinode-585561 250m (3%) 0 (0%) 0 (0%) 0 (0%) 3m48s
kube-system kube-controller-manager-multinode-585561 200m (2%) 0 (0%) 0 (0%) 0 (0%) 3m49s
kube-system kube-proxy-wxrvx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m36s
kube-system kube-scheduler-multinode-585561 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3m50s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m34s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 100m (1%)
memory 220Mi (0%) 220Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m35s kube-proxy
Normal Starting 3m48s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m48s kubelet Node multinode-585561 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m48s kubelet Node multinode-585561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m48s kubelet Node multinode-585561 status is now: NodeHasSufficientPID
Normal NodeNotReady 3m48s kubelet Node multinode-585561 status is now: NodeNotReady
Normal NodeAllocatableEnforced 3m48s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m38s kubelet Node multinode-585561 status is now: NodeReady
Normal RegisteredNode 3m36s node-controller Node multinode-585561 event: Registered Node multinode-585561 in Controller
Name: multinode-585561-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-585561-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 24 Jan 2023 17:46:33 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-585561-m02
AcquireTime: <unset>
RenewTime: Tue, 24 Jan 2023 17:49:37 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 24 Jan 2023 17:47:04 +0000 Tue, 24 Jan 2023 17:46:33 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 24 Jan 2023 17:47:04 +0000 Tue, 24 Jan 2023 17:46:33 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 24 Jan 2023 17:47:04 +0000 Tue, 24 Jan 2023 17:46:33 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 24 Jan 2023 17:47:04 +0000 Tue, 24 Jan 2023 17:46:34 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.58.3
Hostname: multinode-585561-m02
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: 11af74b3a18d4d7295d17813eccf6dd7
System UUID: 29f7dbff-788f-4a96-9540-e33f700d45ce
Boot ID: 202c095e-d1d4-4b92-9c9d-a08c9f26c94d
Kernel Version: 5.15.0-1027-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.22
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-6b86dd6d48-c86kc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m8s
kube-system kindnet-j5zlg 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 3m13s
kube-system kube-proxy-txqvw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m13s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (1%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 3m10s kube-proxy
Normal Starting 3m13s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m13s (x2 over 3m13s) kubelet Node multinode-585561-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m13s (x2 over 3m13s) kubelet Node multinode-585561-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m13s (x2 over 3m13s) kubelet Node multinode-585561-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m13s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 3m12s kubelet Node multinode-585561-m02 status is now: NodeReady
Normal RegisteredNode 3m11s node-controller Node multinode-585561-m02 event: Registered Node multinode-585561-m02 in Controller
Name: multinode-585561-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-585561-m03
kubernetes.io/os=linux
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 24 Jan 2023 17:47:26 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-585561-m03
AcquireTime: <unset>
RenewTime: Tue, 24 Jan 2023 17:49:38 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 24 Jan 2023 17:47:36 +0000 Tue, 24 Jan 2023 17:47:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 24 Jan 2023 17:47:36 +0000 Tue, 24 Jan 2023 17:47:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 24 Jan 2023 17:47:36 +0000 Tue, 24 Jan 2023 17:47:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 24 Jan 2023 17:47:36 +0000 Tue, 24 Jan 2023 17:47:26 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.58.4
Hostname: multinode-585561-m03
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: 11af74b3a18d4d7295d17813eccf6dd7
System UUID: 01ee951f-caa0-4cd4-aba5-a87993504d5a
Boot ID: 202c095e-d1d4-4b92-9c9d-a08c9f26c94d
Kernel Version: 5.15.0-1027-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.22
Kubelet Version: v1.26.1
Kube-Proxy Version: v1.26.1
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-hscwc 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 2m47s
kube-system kube-proxy-z965l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m47s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (1%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m44s kube-proxy
Normal Starting 2m6s kube-proxy
Normal NodeHasSufficientPID 2m47s (x2 over 2m47s) kubelet Node multinode-585561-m03 status is now: NodeHasSufficientPID
Normal Starting 2m47s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m47s kubelet Updated Node Allocatable limit across pods
Normal NodeHasNoDiskPressure 2m47s (x2 over 2m47s) kubelet Node multinode-585561-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m47s (x2 over 2m47s) kubelet Node multinode-585561-m03 status is now: NodeHasSufficientMemory
Normal NodeReady 2m46s kubelet Node multinode-585561-m03 status is now: NodeReady
Normal Starting 2m27s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m27s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m20s (x7 over 2m27s) kubelet Node multinode-585561-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m20s (x7 over 2m27s) kubelet Node multinode-585561-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m20s (x7 over 2m27s) kubelet Node multinode-585561-m03 status is now: NodeHasSufficientPID
*
* ==> dmesg <==
* [ +0.007965] FS-Cache: N-cookie d=00000000479796fa{9p.inode} n=000000003d13ee1d
[ +0.008725] FS-Cache: N-key=[8] '89a00f0200000000'
[ +3.146479] FS-Cache: Duplicate cookie detected
[ +0.004688] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
[ +0.006752] FS-Cache: O-cookie d=00000000479796fa{9p.inode} n=000000000a33edc1
[ +0.008071] FS-Cache: O-key=[8] '88a00f0200000000'
[ +0.004938] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.006603] FS-Cache: N-cookie d=00000000479796fa{9p.inode} n=000000001a93f6e1
[ +0.007465] FS-Cache: N-key=[8] '88a00f0200000000'
[ +0.404647] FS-Cache: Duplicate cookie detected
[ +0.004698] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006786] FS-Cache: O-cookie d=00000000479796fa{9p.inode} n=000000008a43e3d4
[ +0.007636] FS-Cache: O-key=[8] '99a00f0200000000'
[ +0.004975] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.006629] FS-Cache: N-cookie d=00000000479796fa{9p.inode} n=00000000846a6a20
[ +0.008738] FS-Cache: N-key=[8] '99a00f0200000000'
[Jan24 17:36] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 7b ab a7 58 38 08 06
[Jan24 17:37] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jan24 17:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e ce 8c ad 7a 7e 08 06
[ +0.130814] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
[ +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 69 bd a0 78 14 08 06
[Jan24 17:44] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
[ +0.000009] ll header: 00000000: ff ff ff ff ff ff f6 15 94 bf f7 0e 08 06
*
* ==> etcd [a8a00c2b5f80] <==
* {"level":"info","ts":"2023-01-24T17:45:52.837Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
{"level":"info","ts":"2023-01-24T17:45:52.839Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
{"level":"info","ts":"2023-01-24T17:45:53.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-585561 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-24T17:45:53.755Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-24T17:45:53.756Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-24T17:45:53.757Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
{"level":"info","ts":"2023-01-24T17:45:53.757Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-24T17:46:26.858Z","caller":"traceutil/trace.go:171","msg":"trace[1820907400] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"124.887031ms","start":"2023-01-24T17:46:26.734Z","end":"2023-01-24T17:46:26.858Z","steps":["trace[1820907400] 'process raft request' (duration: 60.351066ms)","trace[1820907400] 'compare' (duration: 64.417937ms)"],"step_count":2}
*
* ==> kernel <==
* 17:49:46 up 32 min, 0 users, load average: 0.26, 0.88, 0.90
Linux multinode-585561 5.15.0-1027-gcp #34~20.04.1-Ubuntu SMP Mon Jan 9 18:40:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [8db5094d208b] <==
* I0124 17:45:55.258392 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0124 17:45:55.258402 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0124 17:45:55.258922 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0124 17:45:55.259321 1 shared_informer.go:280] Caches are synced for configmaps
I0124 17:45:55.259622 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0124 17:45:55.259640 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0124 17:45:55.261852 1 controller.go:615] quota admission added evaluator for: namespaces
I0124 17:45:55.278929 1 shared_informer.go:280] Caches are synced for node_authorizer
I0124 17:45:55.280628 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0124 17:45:55.945704 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0124 17:45:56.162820 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0124 17:45:56.166758 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0124 17:45:56.166774 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0124 17:45:56.558863 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0124 17:45:56.593191 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0124 17:45:56.694499 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0124 17:45:56.702819 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
I0124 17:45:56.703724 1 controller.go:615] quota admission added evaluator for: endpoints
I0124 17:45:56.707696 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0124 17:45:57.189040 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0124 17:45:57.989573 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0124 17:45:58.001116 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0124 17:45:58.008285 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0124 17:46:10.496719 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0124 17:46:10.899399 1 controller.go:615] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [8d7a8a4801df] <==
* I0124 17:46:11.056003 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-lfdwf"
I0124 17:46:11.276011 1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
I0124 17:46:11.287227 1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-5748b"
W0124 17:46:33.909489 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-585561-m02" does not exist
I0124 17:46:33.916264 1 range_allocator.go:372] Set node multinode-585561-m02 PodCIDR to [10.244.1.0/24]
I0124 17:46:33.919965 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j5zlg"
I0124 17:46:33.922734 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-txqvw"
W0124 17:46:34.524624 1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
W0124 17:46:35.246003 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-585561-m02. Assuming now as a timestamp.
I0124 17:46:35.246031 1 event.go:294] "Event occurred" object="multinode-585561-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-585561-m02 event: Registered Node multinode-585561-m02 in Controller"
I0124 17:46:38.426356 1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
I0124 17:46:38.434633 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-c86kc"
I0124 17:46:38.440168 1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-7rp7j"
W0124 17:46:59.700288 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-585561-m03" does not exist
W0124 17:46:59.700335 1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
I0124 17:46:59.710975 1 range_allocator.go:372] Set node multinode-585561-m03 PodCIDR to [10.244.2.0/24]
I0124 17:46:59.711802 1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hscwc"
I0124 17:46:59.711827 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z965l"
W0124 17:47:00.250029 1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-585561-m03. Assuming now as a timestamp.
I0124 17:47:00.250048 1 event.go:294] "Event occurred" object="multinode-585561-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-585561-m03 event: Registered Node multinode-585561-m03 in Controller"
W0124 17:47:00.315034 1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
W0124 17:47:26.171868 1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
W0124 17:47:26.217085 1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-585561-m03" does not exist
W0124 17:47:26.217158 1 topologycache.go:232] Can't get CPU or zone information for multinode-585561-m02 node
I0124 17:47:26.224446 1 range_allocator.go:372] Set node multinode-585561-m03 PodCIDR to [10.244.3.0/24]
*
* ==> kube-proxy [7e5eddf7c5d5] <==
* I0124 17:46:11.137588 1 node.go:163] Successfully retrieved node IP: 192.168.58.2
I0124 17:46:11.137784 1 server_others.go:109] "Detected node IP" address="192.168.58.2"
I0124 17:46:11.138018 1 server_others.go:535] "Using iptables proxy"
I0124 17:46:11.161726 1 server_others.go:176] "Using iptables Proxier"
I0124 17:46:11.161766 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0124 17:46:11.161776 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0124 17:46:11.161801 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0124 17:46:11.161835 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0124 17:46:11.162766 1 server.go:655] "Version info" version="v1.26.1"
I0124 17:46:11.162785 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0124 17:46:11.163311 1 config.go:317] "Starting service config controller"
I0124 17:46:11.163332 1 shared_informer.go:273] Waiting for caches to sync for service config
I0124 17:46:11.163349 1 config.go:226] "Starting endpoint slice config controller"
I0124 17:46:11.163353 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0124 17:46:11.163767 1 config.go:444] "Starting node config controller"
I0124 17:46:11.164106 1 shared_informer.go:273] Waiting for caches to sync for node config
I0124 17:46:11.264221 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0124 17:46:11.264284 1 shared_informer.go:280] Caches are synced for service config
I0124 17:46:11.264708 1 shared_informer.go:280] Caches are synced for node config
*
* ==> kube-scheduler [8af55922f6ee] <==
* W0124 17:45:55.253090 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0124 17:45:55.253427 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0124 17:45:55.253431 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0124 17:45:55.253446 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0124 17:45:55.253431 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0124 17:45:55.253458 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0124 17:45:55.253460 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0124 17:45:55.253185 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0124 17:45:55.253475 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0124 17:45:55.253477 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0124 17:45:55.253336 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0124 17:45:55.253508 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0124 17:45:55.253363 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0124 17:45:55.253528 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0124 17:45:55.253101 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0124 17:45:55.253547 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0124 17:45:55.253273 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0124 17:45:55.253563 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0124 17:45:56.158168 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0124 17:45:56.158207 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0124 17:45:56.406846 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0124 17:45:56.406880 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0124 17:45:56.415884 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0124 17:45:56.415914 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0124 17:45:59.352147 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Tue 2023-01-24 17:45:29 UTC, end at Tue 2023-01-24 17:49:47 UTC. --
Jan 24 17:46:15 multinode-585561 kubelet[2886]: I0124 17:46:15.958660 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d694add5bf5369c664ffa57535f4fd192341b356eb4489ace3841139b339b6f"
Jan 24 17:46:16 multinode-585561 kubelet[2886]: I0124 17:46:16.164014 2886 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=eec968db-c6da-4e2a-a20f-de7ed82a64cf path="/var/lib/kubelet/pods/eec968db-c6da-4e2a-a20f-de7ed82a64cf/volumes"
Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.229855 2886 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\""
Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.229926 2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.229950 2886 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
Jan 24 17:46:16 multinode-585561 kubelet[2886]: E0124 17:46:16.230012 2886 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14\\\" network for pod \\\"coredns-787d4945fb-lfdwf\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-lfdwf_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-lfdwf" podUID=3ad6d110-548d-4cec-bae8-945a1e7d7853
Jan 24 17:46:16 multinode-585561 kubelet[2886]: I0124 17:46:16.976003 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="937eacd2792177a43b5b7b37631dc6f371a16d1605185ceb6a64d0c79c324a14"
Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263039 2886 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\""
Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263103 2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263125 2886 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
Jan 24 17:46:17 multinode-585561 kubelet[2886]: E0124 17:46:17.263189 2886 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904\\\" network for pod \\\"coredns-787d4945fb-lfdwf\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-lfdwf_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-lfdwf" podUID=3ad6d110-548d-4cec-bae8-945a1e7d7853
Jan 24 17:46:17 multinode-585561 kubelet[2886]: I0124 17:46:17.991309 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2b54e1f83ad097f09e72a940c663a5ac96e9961a6b5ae1b241ade9931577904"
Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284372 2886 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\""
Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284448 2886 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to set up sandbox container \"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284473 2886 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to set up sandbox container \"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\" network for pod \"coredns-787d4945fb-lfdwf\": networkPlugin cni failed to set up pod \"coredns-787d4945fb-lfdwf_kube-system\" network: unsupported CNI result version \"1.0.0\"" pod="kube-system/coredns-787d4945fb-lfdwf"
Jan 24 17:46:18 multinode-585561 kubelet[2886]: E0124 17:46:18.284608 2886 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-lfdwf_kube-system(3ad6d110-548d-4cec-bae8-945a1e7d7853)\\\": rpc error: code = Unknown desc = failed to set up sandbox container \\\"2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c\\\" network for pod \\\"coredns-787d4945fb-lfdwf\\\": networkPlugin cni failed to set up pod \\\"coredns-787d4945fb-lfdwf_kube-system\\\" network: unsupported CNI result version \\\"1.0.0\\\"\"" pod="kube-system/coredns-787d4945fb-lfdwf" podUID=3ad6d110-548d-4cec-bae8-945a1e7d7853
Jan 24 17:46:18 multinode-585561 kubelet[2886]: I0124 17:46:18.650642 2886 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 24 17:46:18 multinode-585561 kubelet[2886]: I0124 17:46:18.651245 2886 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 24 17:46:19 multinode-585561 kubelet[2886]: I0124 17:46:19.005307 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae89efb3debd6320332c3a114e0ab20f4cabfa15e95d644cfdd6ce0f42f1c8c"
Jan 24 17:46:20 multinode-585561 kubelet[2886]: I0124 17:46:20.037141 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4zggw" podStartSLOduration=-9.223372026817673e+09 pod.CreationTimestamp="2023-01-24 17:46:10 +0000 UTC" firstStartedPulling="2023-01-24 17:46:11.745469717 +0000 UTC m=+13.777976029" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 17:46:15.975027099 +0000 UTC m=+18.007533417" watchObservedRunningTime="2023-01-24 17:46:20.037103988 +0000 UTC m=+22.069610305"
Jan 24 17:46:20 multinode-585561 kubelet[2886]: I0124 17:46:20.037338 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-lfdwf" podStartSLOduration=9.037301283 pod.CreationTimestamp="2023-01-24 17:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 17:46:20.036544143 +0000 UTC m=+22.069050461" watchObservedRunningTime="2023-01-24 17:46:20.037301283 +0000 UTC m=+22.069807600"
Jan 24 17:46:38 multinode-585561 kubelet[2886]: I0124 17:46:38.447996 2886 topology_manager.go:210] "Topology Admit Handler"
Jan 24 17:46:38 multinode-585561 kubelet[2886]: I0124 17:46:38.531385 2886 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tddk\" (UniqueName: \"kubernetes.io/projected/26cc4840-317c-472e-99bb-629a04d62105-kube-api-access-6tddk\") pod \"busybox-6b86dd6d48-7rp7j\" (UID: \"26cc4840-317c-472e-99bb-629a04d62105\") " pod="default/busybox-6b86dd6d48-7rp7j"
Jan 24 17:46:39 multinode-585561 kubelet[2886]: I0124 17:46:39.192554 2886 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124081a92701436a43375e12a1c1bb790b0520e24efaf1d970e27208432e8dae"
Jan 24 17:46:41 multinode-585561 kubelet[2886]: I0124 17:46:41.225686 2886 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-6b86dd6d48-7rp7j" podStartSLOduration=-9.223372033629145e+09 pod.CreationTimestamp="2023-01-24 17:46:38 +0000 UTC" firstStartedPulling="2023-01-24 17:46:39.21404037 +0000 UTC m=+41.246546687" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-24 17:46:41.225554296 +0000 UTC m=+43.258060614" watchObservedRunningTime="2023-01-24 17:46:41.225631061 +0000 UTC m=+43.258137379"
*
* ==> storage-provisioner [1f47880f5c35] <==
* I0124 17:46:13.225870 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0124 17:46:13.232899 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0124 17:46:13.232952 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0124 17:46:13.240949 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0124 17:46:13.241098 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-585561_c64320d4-82d1-43e3-ada2-aae090f594fc!
I0124 17:46:13.241061 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0ca7298-edcb-4d68-ade8-a30e9888ab9a", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-585561_c64320d4-82d1-43e3-ada2-aae090f594fc became leader
I0124 17:46:13.341876 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-585561_c64320d4-82d1-43e3-ada2-aae090f594fc!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-585561 -n multinode-585561
helpers_test.go:261: (dbg) Run: kubectl --context multinode-585561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (149.19s)
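A minimal sketch for manually re-running the post-mortem checks against the same environment, assuming the profile "multinode-585561" and the out/minikube-linux-amd64 binary from this run still exist (both commands are copied verbatim from the helpers_test.go lines above):

    # Report the API server state for the profile's control-plane node
    out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-585561 -n multinode-585561
    # List pods in any namespace that are not in the Running phase
    kubectl --context multinode-585561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running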