=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run: out/minikube-linux-amd64 -p multinode-944570 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 node start m03 --alsologtostderr: exit status 90 (18.048886857s)
-- stdout --
* Starting worker node multinode-944570-m03 in cluster multinode-944570
* Restarting existing kvm2 VM for "multinode-944570-m03" ...
-- /stdout --
** stderr **
I0830 20:28:06.573562 244307 out.go:296] Setting OutFile to fd 1 ...
I0830 20:28:06.573729 244307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:28:06.573739 244307 out.go:309] Setting ErrFile to fd 2...
I0830 20:28:06.573743 244307 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:28:06.573939 244307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
I0830 20:28:06.574189 244307 mustload.go:65] Loading cluster: multinode-944570
I0830 20:28:06.575608 244307 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:28:06.576245 244307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:28:06.576305 244307 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:28:06.591317 244307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
I0830 20:28:06.591776 244307 main.go:141] libmachine: () Calling .GetVersion
I0830 20:28:06.592489 244307 main.go:141] libmachine: Using API Version 1
I0830 20:28:06.592514 244307 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:28:06.592877 244307 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:28:06.593092 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetState
W0830 20:28:06.594543 244307 host.go:58] "multinode-944570-m03" host status: Stopped
I0830 20:28:06.596855 244307 out.go:177] * Starting worker node multinode-944570-m03 in cluster multinode-944570
I0830 20:28:06.598212 244307 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0830 20:28:06.598256 244307 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
I0830 20:28:06.598266 244307 cache.go:57] Caching tarball of preloaded images
I0830 20:28:06.598363 244307 preload.go:174] Found /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0830 20:28:06.598375 244307 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0830 20:28:06.598508 244307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:28:06.598707 244307 start.go:365] acquiring machines lock for multinode-944570-m03: {Name:mk9a092bb7d2f42c1b785aa1d546d37ad26cec77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0830 20:28:06.598757 244307 start.go:369] acquired machines lock for "multinode-944570-m03" in 23.357µs
I0830 20:28:06.598772 244307 start.go:96] Skipping create...Using existing machine configuration
I0830 20:28:06.598776 244307 fix.go:54] fixHost starting: m03
I0830 20:28:06.599038 244307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:28:06.599071 244307 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:28:06.614301 244307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
I0830 20:28:06.614773 244307 main.go:141] libmachine: () Calling .GetVersion
I0830 20:28:06.615382 244307 main.go:141] libmachine: Using API Version 1
I0830 20:28:06.615403 244307 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:28:06.615702 244307 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:28:06.615920 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:06.616060 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetState
I0830 20:28:06.617687 244307 fix.go:102] recreateIfNeeded on multinode-944570-m03: state=Stopped err=<nil>
I0830 20:28:06.617717 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
W0830 20:28:06.617886 244307 fix.go:128] unexpected machine state, will restart: <nil>
I0830 20:28:06.619821 244307 out.go:177] * Restarting existing kvm2 VM for "multinode-944570-m03" ...
I0830 20:28:06.621392 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .Start
I0830 20:28:06.621583 244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring networks are active...
I0830 20:28:06.622306 244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring network default is active
I0830 20:28:06.622618 244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring network mk-multinode-944570 is active
I0830 20:28:06.622938 244307 main.go:141] libmachine: (multinode-944570-m03) Getting domain xml...
I0830 20:28:06.623618 244307 main.go:141] libmachine: (multinode-944570-m03) Creating domain...
I0830 20:28:07.885177 244307 main.go:141] libmachine: (multinode-944570-m03) Waiting to get IP...
I0830 20:28:07.886080 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.886564 244307 main.go:141] libmachine: (multinode-944570-m03) Found IP for machine: 192.168.39.83
I0830 20:28:07.886598 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has current primary IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.886610 244307 main.go:141] libmachine: (multinode-944570-m03) Reserving static IP address...
I0830 20:28:07.887023 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "multinode-944570-m03", mac: "52:54:00:21:38:ac", ip: "192.168.39.83"} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:27:24 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:07.887056 244307 main.go:141] libmachine: (multinode-944570-m03) Reserved static IP address: 192.168.39.83
I0830 20:28:07.887076 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | skip adding static IP to network mk-multinode-944570 - found existing host DHCP lease matching {name: "multinode-944570-m03", mac: "52:54:00:21:38:ac", ip: "192.168.39.83"}
I0830 20:28:07.887096 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Getting to WaitForSSH function...
I0830 20:28:07.887114 244307 main.go:141] libmachine: (multinode-944570-m03) Waiting for SSH to be available...
I0830 20:28:07.889355 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.889760 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:27:24 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:07.889805 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.889875 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Using SSH client type: external
I0830 20:28:07.889913 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa (-rw-------)
I0830 20:28:07.889955 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0830 20:28:07.889971 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | About to run SSH command:
I0830 20:28:07.889986 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | exit 0
I0830 20:28:19.990768 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | SSH cmd err, output: <nil>:
I0830 20:28:19.991228 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetConfigRaw
I0830 20:28:19.992027 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:19.994736 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.995178 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:19.995218 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.995566 244307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:28:19.995804 244307 machine.go:88] provisioning docker machine ...
I0830 20:28:19.995826 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:19.996062 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
I0830 20:28:19.996233 244307 buildroot.go:166] provisioning hostname "multinode-944570-m03"
I0830 20:28:19.996251 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
I0830 20:28:19.996393 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:19.998799 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.999129 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:19.999158 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.999322 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:19.999531 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:19.999724 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:19.999869 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.000039 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.000672 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.000697 244307 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-944570-m03 && echo "multinode-944570-m03" | sudo tee /etc/hostname
I0830 20:28:20.138639 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570-m03
I0830 20:28:20.138679 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.141577 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.142086 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.142129 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.142250 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.142466 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.142639 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.142749 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.142907 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.143334 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.143352 244307 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-944570-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-944570-m03' | sudo tee -a /etc/hosts;
fi
fi
I0830 20:28:20.266328 244307 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0830 20:28:20.266356 244307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
I0830 20:28:20.266393 244307 buildroot.go:174] setting up certificates
I0830 20:28:20.266406 244307 provision.go:83] configureAuth start
I0830 20:28:20.266420 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
I0830 20:28:20.266734 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:20.269497 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.269864 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.269904 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.270090 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.272135 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.272553 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.272582 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.272708 244307 provision.go:138] copyHostCerts
I0830 20:28:20.272767 244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
I0830 20:28:20.272777 244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
I0830 20:28:20.272844 244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
I0830 20:28:20.272966 244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
I0830 20:28:20.272976 244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
I0830 20:28:20.273002 244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
I0830 20:28:20.273067 244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
I0830 20:28:20.273074 244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
I0830 20:28:20.273094 244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
I0830 20:28:20.273172 244307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570-m03 san=[192.168.39.83 192.168.39.83 localhost 127.0.0.1 minikube multinode-944570-m03]
I0830 20:28:20.393764 244307 provision.go:172] copyRemoteCerts
I0830 20:28:20.393820 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0830 20:28:20.393844 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.396496 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.396831 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.396864 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.397040 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.397257 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.397412 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.397568 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:20.484425 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0830 20:28:20.505011 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0830 20:28:20.525576 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0830 20:28:20.545797 244307 provision.go:86] duration metric: configureAuth took 279.365155ms
I0830 20:28:20.545834 244307 buildroot.go:189] setting minikube options for container-runtime
I0830 20:28:20.546069 244307 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:28:20.546094 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:20.546398 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.549013 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.549347 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.549377 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.549558 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.549744 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.549908 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.550025 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.550201 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.550580 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.550592 244307 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0830 20:28:20.669126 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0830 20:28:20.669159 244307 buildroot.go:70] root file system type: tmpfs
I0830 20:28:20.669312 244307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0830 20:28:20.669338 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.671868 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.672232 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.672257 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.672449 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.672640 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.672815 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.672955 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.673120 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.673795 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.673892 244307 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0830 20:28:20.799169 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0830 20:28:20.799231 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.802123 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.802501 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.802530 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.802699 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.802869 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.803010 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.803149 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.803394 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.803892 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.803918 244307 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0830 20:28:21.578444 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0830 20:28:21.578469 244307 machine.go:91] provisioned docker machine in 1.582651123s
I0830 20:28:21.578480 244307 start.go:300] post-start starting for "multinode-944570-m03" (driver="kvm2")
I0830 20:28:21.578490 244307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0830 20:28:21.578511 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.578900 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0830 20:28:21.578942 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.581578 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.581969 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.581997 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.582131 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.582369 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.582565 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.582749 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.668786 244307 ssh_runner.go:195] Run: cat /etc/os-release
I0830 20:28:21.672898 244307 info.go:137] Remote host: Buildroot 2021.02.12
I0830 20:28:21.672928 244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
I0830 20:28:21.673010 244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
I0830 20:28:21.673094 244307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
I0830 20:28:21.673181 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0830 20:28:21.682570 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
I0830 20:28:21.702791 244307 start.go:303] post-start completed in 124.296018ms
I0830 20:28:21.702818 244307 fix.go:56] fixHost completed within 15.104040753s
I0830 20:28:21.702845 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.705614 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.706051 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.706103 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.706277 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.706472 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.706649 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.706796 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.706949 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:21.707369 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:21.707382 244307 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0830 20:28:21.819938 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427301.771008702
I0830 20:28:21.819966 244307 fix.go:206] guest clock: 1693427301.771008702
I0830 20:28:21.819973 244307 fix.go:219] Guest: 2023-08-30 20:28:21.771008702 +0000 UTC Remote: 2023-08-30 20:28:21.702822945 +0000 UTC m=+15.165600981 (delta=68.185757ms)
I0830 20:28:21.819993 244307 fix.go:190] guest clock delta is within tolerance: 68.185757ms
I0830 20:28:21.819998 244307 start.go:83] releasing machines lock for "multinode-944570-m03", held for 15.221231305s
I0830 20:28:21.820019 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.820357 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:21.823024 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.823407 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.823431 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.823638 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824224 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824406 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824518 244307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0830 20:28:21.824558 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.824642 244307 ssh_runner.go:195] Run: systemctl --version
I0830 20:28:21.824671 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.827280 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827583 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827738 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.827775 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827921 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.828041 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.828081 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.828087 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.828217 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.828321 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.828348 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.828494 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.828560 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.828679 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.961459 244307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0830 20:28:21.966821 244307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0830 20:28:21.966920 244307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0830 20:28:21.981519 244307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0830 20:28:21.981544 244307 start.go:466] detecting cgroup driver to use...
I0830 20:28:21.981698 244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:28:21.998451 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0830 20:28:22.007411 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0830 20:28:22.016484 244307 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0830 20:28:22.016544 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0830 20:28:22.025752 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:28:22.034759 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0830 20:28:22.043923 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:28:22.052964 244307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0830 20:28:22.062283 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0830 20:28:22.071333 244307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0830 20:28:22.079597 244307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0830 20:28:22.087564 244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:22.189186 244307 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0830 20:28:22.206890 244307 start.go:466] detecting cgroup driver to use...
I0830 20:28:22.206994 244307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0830 20:28:22.220310 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:28:22.231888 244307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0830 20:28:22.247009 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:28:22.258664 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:28:22.269656 244307 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0830 20:28:22.300955 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:28:22.312769 244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:28:22.329383 244307 ssh_runner.go:195] Run: which cri-dockerd
I0830 20:28:22.332782 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0830 20:28:22.340908 244307 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0830 20:28:22.354724 244307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0830 20:28:22.470530 244307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0830 20:28:22.573526 244307 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0830 20:28:22.573569 244307 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0830 20:28:22.590701 244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:22.696373 244307 ssh_runner.go:195] Run: sudo systemctl restart docker
I0830 20:28:24.102132 244307 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405720981s)
I0830 20:28:24.102211 244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:28:24.213758 244307 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0830 20:28:24.331361 244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:28:24.437979 244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:24.557719 244307 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0830 20:28:24.572334 244307 out.go:177]
W0830 20:28:24.573820 244307 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0830 20:28:24.573836 244307 out.go:239] *
W0830 20:28:24.576192 244307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0830 20:28:24.577631 244307 out.go:177]
** /stderr **
multinode_test.go:256: I0830 20:28:06.573562 244307 out.go:296] Setting OutFile to fd 1 ...
I0830 20:28:06.596855 244307 out.go:177] * Starting worker node multinode-944570-m03 in cluster multinode-944570
I0830 20:28:06.598212 244307 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0830 20:28:06.598256 244307 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
I0830 20:28:06.598266 244307 cache.go:57] Caching tarball of preloaded images
I0830 20:28:06.598363 244307 preload.go:174] Found /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0830 20:28:06.598375 244307 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0830 20:28:06.598508 244307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:28:06.598707 244307 start.go:365] acquiring machines lock for multinode-944570-m03: {Name:mk9a092bb7d2f42c1b785aa1d546d37ad26cec77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0830 20:28:06.598757 244307 start.go:369] acquired machines lock for "multinode-944570-m03" in 23.357µs
I0830 20:28:06.598772 244307 start.go:96] Skipping create...Using existing machine configuration
I0830 20:28:06.598776 244307 fix.go:54] fixHost starting: m03
I0830 20:28:06.599038 244307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:28:06.599071 244307 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:28:06.614301 244307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
I0830 20:28:06.614773 244307 main.go:141] libmachine: () Calling .GetVersion
I0830 20:28:06.615382 244307 main.go:141] libmachine: Using API Version 1
I0830 20:28:06.615403 244307 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:28:06.615702 244307 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:28:06.615920 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:06.616060 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetState
I0830 20:28:06.617687 244307 fix.go:102] recreateIfNeeded on multinode-944570-m03: state=Stopped err=<nil>
I0830 20:28:06.617717 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
W0830 20:28:06.617886 244307 fix.go:128] unexpected machine state, will restart: <nil>
I0830 20:28:06.619821 244307 out.go:177] * Restarting existing kvm2 VM for "multinode-944570-m03" ...
I0830 20:28:06.621392 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .Start
I0830 20:28:06.621583 244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring networks are active...
I0830 20:28:06.622306 244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring network default is active
I0830 20:28:06.622618 244307 main.go:141] libmachine: (multinode-944570-m03) Ensuring network mk-multinode-944570 is active
I0830 20:28:06.622938 244307 main.go:141] libmachine: (multinode-944570-m03) Getting domain xml...
I0830 20:28:06.623618 244307 main.go:141] libmachine: (multinode-944570-m03) Creating domain...
I0830 20:28:07.885177 244307 main.go:141] libmachine: (multinode-944570-m03) Waiting to get IP...
I0830 20:28:07.886080 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.886564 244307 main.go:141] libmachine: (multinode-944570-m03) Found IP for machine: 192.168.39.83
I0830 20:28:07.886598 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has current primary IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.886610 244307 main.go:141] libmachine: (multinode-944570-m03) Reserving static IP address...
I0830 20:28:07.887023 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "multinode-944570-m03", mac: "52:54:00:21:38:ac", ip: "192.168.39.83"} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:27:24 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:07.887056 244307 main.go:141] libmachine: (multinode-944570-m03) Reserved static IP address: 192.168.39.83
I0830 20:28:07.887076 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | skip adding static IP to network mk-multinode-944570 - found existing host DHCP lease matching {name: "multinode-944570-m03", mac: "52:54:00:21:38:ac", ip: "192.168.39.83"}
I0830 20:28:07.887096 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Getting to WaitForSSH function...
I0830 20:28:07.887114 244307 main.go:141] libmachine: (multinode-944570-m03) Waiting for SSH to be available...
I0830 20:28:07.889355 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.889760 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:27:24 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:07.889805 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:07.889875 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Using SSH client type: external
I0830 20:28:07.889913 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa (-rw-------)
I0830 20:28:07.889955 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0830 20:28:07.889971 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | About to run SSH command:
I0830 20:28:07.889986 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | exit 0
I0830 20:28:19.990768 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | SSH cmd err, output: <nil>:
I0830 20:28:19.991228 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetConfigRaw
I0830 20:28:19.992027 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:19.994736 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.995178 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:19.995218 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.995566 244307 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:28:19.995804 244307 machine.go:88] provisioning docker machine ...
I0830 20:28:19.995826 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:19.996062 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
I0830 20:28:19.996233 244307 buildroot.go:166] provisioning hostname "multinode-944570-m03"
I0830 20:28:19.996251 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
I0830 20:28:19.996393 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:19.998799 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.999129 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:19.999158 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:19.999322 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:19.999531 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:19.999724 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:19.999869 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.000039 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.000672 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.000697 244307 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-944570-m03 && echo "multinode-944570-m03" | sudo tee /etc/hostname
I0830 20:28:20.138639 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570-m03
I0830 20:28:20.138679 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.141577 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.142086 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.142129 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.142250 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.142466 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.142639 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.142749 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.142907 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.143334 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.143352 244307 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-944570-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-944570-m03' | sudo tee -a /etc/hosts;
fi
fi
I0830 20:28:20.266328 244307 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0830 20:28:20.266356 244307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
I0830 20:28:20.266393 244307 buildroot.go:174] setting up certificates
I0830 20:28:20.266406 244307 provision.go:83] configureAuth start
I0830 20:28:20.266420 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetMachineName
I0830 20:28:20.266734 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:20.269497 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.269864 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.269904 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.270090 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.272135 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.272553 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.272582 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.272708 244307 provision.go:138] copyHostCerts
I0830 20:28:20.272767 244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
I0830 20:28:20.272777 244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
I0830 20:28:20.272844 244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
I0830 20:28:20.272966 244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
I0830 20:28:20.272976 244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
I0830 20:28:20.273002 244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
I0830 20:28:20.273067 244307 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
I0830 20:28:20.273074 244307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
I0830 20:28:20.273094 244307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
I0830 20:28:20.273172 244307 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570-m03 san=[192.168.39.83 192.168.39.83 localhost 127.0.0.1 minikube multinode-944570-m03]
I0830 20:28:20.393764 244307 provision.go:172] copyRemoteCerts
I0830 20:28:20.393820 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0830 20:28:20.393844 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.396496 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.396831 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.396864 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.397040 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.397257 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.397412 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.397568 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:20.484425 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0830 20:28:20.505011 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0830 20:28:20.525576 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0830 20:28:20.545797 244307 provision.go:86] duration metric: configureAuth took 279.365155ms
I0830 20:28:20.545834 244307 buildroot.go:189] setting minikube options for container-runtime
I0830 20:28:20.546069 244307 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:28:20.546094 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:20.546398 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.549013 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.549347 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.549377 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.549558 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.549744 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.549908 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.550025 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.550201 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.550580 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.550592 244307 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0830 20:28:20.669126 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0830 20:28:20.669159 244307 buildroot.go:70] root file system type: tmpfs
I0830 20:28:20.669312 244307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0830 20:28:20.669338 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.671868 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.672232 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.672257 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.672449 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.672640 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.672815 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.672955 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.673120 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.673795 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.673892 244307 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0830 20:28:20.799169 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0830 20:28:20.799231 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:20.802123 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.802501 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:20.802530 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:20.802699 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:20.802869 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.803010 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:20.803149 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:20.803394 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:20.803892 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:20.803918 244307 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0830 20:28:21.578444 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0830 20:28:21.578469 244307 machine.go:91] provisioned docker machine in 1.582651123s
I0830 20:28:21.578480 244307 start.go:300] post-start starting for "multinode-944570-m03" (driver="kvm2")
I0830 20:28:21.578490 244307 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0830 20:28:21.578511 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.578900 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0830 20:28:21.578942 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.581578 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.581969 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.581997 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.582131 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.582369 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.582565 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.582749 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.668786 244307 ssh_runner.go:195] Run: cat /etc/os-release
I0830 20:28:21.672898 244307 info.go:137] Remote host: Buildroot 2021.02.12
I0830 20:28:21.672928 244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
I0830 20:28:21.673010 244307 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
I0830 20:28:21.673094 244307 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
I0830 20:28:21.673181 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0830 20:28:21.682570 244307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
I0830 20:28:21.702791 244307 start.go:303] post-start completed in 124.296018ms
I0830 20:28:21.702818 244307 fix.go:56] fixHost completed within 15.104040753s
I0830 20:28:21.702845 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.705614 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.706051 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.706103 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.706277 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.706472 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.706649 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.706796 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.706949 244307 main.go:141] libmachine: Using SSH client type: native
I0830 20:28:21.707369 244307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.83 22 <nil> <nil>}
I0830 20:28:21.707382 244307 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0830 20:28:21.819938 244307 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427301.771008702
I0830 20:28:21.819966 244307 fix.go:206] guest clock: 1693427301.771008702
I0830 20:28:21.819973 244307 fix.go:219] Guest: 2023-08-30 20:28:21.771008702 +0000 UTC Remote: 2023-08-30 20:28:21.702822945 +0000 UTC m=+15.165600981 (delta=68.185757ms)
I0830 20:28:21.819993 244307 fix.go:190] guest clock delta is within tolerance: 68.185757ms
I0830 20:28:21.819998 244307 start.go:83] releasing machines lock for "multinode-944570-m03", held for 15.221231305s
I0830 20:28:21.820019 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.820357 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetIP
I0830 20:28:21.823024 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.823407 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.823431 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.823638 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824224 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824406 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .DriverName
I0830 20:28:21.824518 244307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0830 20:28:21.824558 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.824642 244307 ssh_runner.go:195] Run: systemctl --version
I0830 20:28:21.824671 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHHostname
I0830 20:28:21.827280 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827583 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827738 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.827775 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.827921 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.828041 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:38:ac", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:28:18 +0000 UTC Type:0 Mac:52:54:00:21:38:ac Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-944570-m03 Clientid:01:52:54:00:21:38:ac}
I0830 20:28:21.828081 244307 main.go:141] libmachine: (multinode-944570-m03) DBG | domain multinode-944570-m03 has defined IP address 192.168.39.83 and MAC address 52:54:00:21:38:ac in network mk-multinode-944570
I0830 20:28:21.828087 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.828217 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHPort
I0830 20:28:21.828321 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.828348 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHKeyPath
I0830 20:28:21.828494 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.828560 244307 main.go:141] libmachine: (multinode-944570-m03) Calling .GetSSHUsername
I0830 20:28:21.828679 244307 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m03/id_rsa Username:docker}
I0830 20:28:21.961459 244307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0830 20:28:21.966821 244307 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0830 20:28:21.966920 244307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0830 20:28:21.981519 244307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0830 20:28:21.981544 244307 start.go:466] detecting cgroup driver to use...
I0830 20:28:21.981698 244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:28:21.998451 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0830 20:28:22.007411 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0830 20:28:22.016484 244307 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0830 20:28:22.016544 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0830 20:28:22.025752 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:28:22.034759 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0830 20:28:22.043923 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:28:22.052964 244307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0830 20:28:22.062283 244307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0830 20:28:22.071333 244307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0830 20:28:22.079597 244307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0830 20:28:22.087564 244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:22.189186 244307 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0830 20:28:22.206890 244307 start.go:466] detecting cgroup driver to use...
I0830 20:28:22.206994 244307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0830 20:28:22.220310 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:28:22.231888 244307 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0830 20:28:22.247009 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:28:22.258664 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:28:22.269656 244307 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0830 20:28:22.300955 244307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:28:22.312769 244307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:28:22.329383 244307 ssh_runner.go:195] Run: which cri-dockerd
I0830 20:28:22.332782 244307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0830 20:28:22.340908 244307 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0830 20:28:22.354724 244307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0830 20:28:22.470530 244307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0830 20:28:22.573526 244307 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0830 20:28:22.573569 244307 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0830 20:28:22.590701 244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:22.696373 244307 ssh_runner.go:195] Run: sudo systemctl restart docker
I0830 20:28:24.102132 244307 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405720981s)
I0830 20:28:24.102211 244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:28:24.213758 244307 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0830 20:28:24.331361 244307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:28:24.437979 244307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:28:24.557719 244307 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0830 20:28:24.572334 244307 out.go:177]
W0830 20:28:24.573820 244307 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0830 20:28:24.573836 244307 out.go:239] *
W0830 20:28:24.576192 244307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0830 20:28:24.577631 244307 out.go:177]
multinode_test.go:257: node start returned an error. args "out/minikube-linux-amd64 -p multinode-944570 node start m03 --alsologtostderr": exit status 90
multinode_test.go:261: (dbg) Run: out/minikube-linux-amd64 -p multinode-944570 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-944570 status: exit status 2 (579.416171ms)
-- stdout --
multinode-944570
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
multinode-944570-m02
type: Worker
host: Running
kubelet: Running
multinode-944570-m03
type: Worker
host: Running
kubelet: Stopped
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-944570 status" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-944570 -n multinode-944570
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-944570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-944570 logs -n 25: (1.088354527s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| cp | multinode-944570 cp multinode-944570:/home/docker/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570-m03:/home/docker/cp-test_multinode-944570_multinode-944570-m03.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-944570 ssh -n multinode-944570-m03 sudo cat | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | /home/docker/cp-test_multinode-944570_multinode-944570-m03.txt | | | | | |
| cp | multinode-944570 cp testdata/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570-m02:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | /tmp/TestMultiNodeserialCopyFile109421544/001/cp-test_multinode-944570-m02.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570:/home/docker/cp-test_multinode-944570-m02_multinode-944570.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-944570 ssh -n multinode-944570 sudo cat | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | /home/docker/cp-test_multinode-944570-m02_multinode-944570.txt | | | | | |
| cp | multinode-944570 cp multinode-944570-m02:/home/docker/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:27 UTC |
| | multinode-944570-m03:/home/docker/cp-test_multinode-944570-m02_multinode-944570-m03.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:27 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-944570 ssh -n multinode-944570-m03 sudo cat | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | /home/docker/cp-test_multinode-944570-m02_multinode-944570-m03.txt | | | | | |
| cp | multinode-944570 cp testdata/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | /tmp/TestMultiNodeserialCopyFile109421544/001/cp-test_multinode-944570-m03.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570:/home/docker/cp-test_multinode-944570-m03_multinode-944570.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-944570 ssh -n multinode-944570 sudo cat | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | /home/docker/cp-test_multinode-944570-m03_multinode-944570.txt | | | | | |
| cp | multinode-944570 cp multinode-944570-m03:/home/docker/cp-test.txt | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570-m02:/home/docker/cp-test_multinode-944570-m03_multinode-944570-m02.txt | | | | | |
| ssh | multinode-944570 ssh -n | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | multinode-944570-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-944570 ssh -n multinode-944570-m02 sudo cat | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| | /home/docker/cp-test_multinode-944570-m03_multinode-944570-m02.txt | | | | | |
| node | multinode-944570 node stop m03 | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | 30 Aug 23 20:28 UTC |
| node | multinode-944570 node start | multinode-944570 | jenkins | v1.31.2 | 30 Aug 23 20:28 UTC | |
| | m03 --alsologtostderr | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/08/30 20:24:38
Running on machine: ubuntu-20-agent-9
Binary: Built with gc go1.20.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0830 20:24:38.237538 241645 out.go:296] Setting OutFile to fd 1 ...
I0830 20:24:38.237679 241645 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:24:38.237690 241645 out.go:309] Setting ErrFile to fd 2...
I0830 20:24:38.237697 241645 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 20:24:38.237919 241645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-222139/.minikube/bin
I0830 20:24:38.238591 241645 out.go:303] Setting JSON to false
I0830 20:24:38.239555 241645 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7620,"bootTime":1693419458,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0830 20:24:38.239616 241645 start.go:138] virtualization: kvm guest
I0830 20:24:38.241906 241645 out.go:177] * [multinode-944570] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0830 20:24:38.244008 241645 out.go:177] - MINIKUBE_LOCATION=17145
I0830 20:24:38.244037 241645 notify.go:220] Checking for updates...
I0830 20:24:38.245609 241645 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0830 20:24:38.247196 241645 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17145-222139/kubeconfig
I0830 20:24:38.248684 241645 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-222139/.minikube
I0830 20:24:38.250032 241645 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0830 20:24:38.251947 241645 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0830 20:24:38.253529 241645 driver.go:373] Setting default libvirt URI to qemu:///system
I0830 20:24:38.288180 241645 out.go:177] * Using the kvm2 driver based on user configuration
I0830 20:24:38.289569 241645 start.go:298] selected driver: kvm2
I0830 20:24:38.289588 241645 start.go:902] validating driver "kvm2" against <nil>
I0830 20:24:38.289603 241645 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0830 20:24:38.290690 241645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0830 20:24:38.290811 241645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17145-222139/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0830 20:24:38.310813 241645 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0830 20:24:38.310865 241645 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0830 20:24:38.311070 241645 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0830 20:24:38.311106 241645 cni.go:84] Creating CNI manager for ""
I0830 20:24:38.311119 241645 cni.go:136] 0 nodes found, recommending kindnet
I0830 20:24:38.311124 241645 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
I0830 20:24:38.311134 241645 start_flags.go:319] config:
{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0830 20:24:38.311268 241645 iso.go:125] acquiring lock: {Name:mk193fbe19fd874a72f32d45bb0f490410c0429c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0830 20:24:38.313041 241645 out.go:177] * Starting control plane node multinode-944570 in cluster multinode-944570
I0830 20:24:38.314356 241645 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0830 20:24:38.314383 241645 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4
I0830 20:24:38.314391 241645 cache.go:57] Caching tarball of preloaded images
I0830 20:24:38.314457 241645 preload.go:174] Found /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0830 20:24:38.314467 241645 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0830 20:24:38.314760 241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:24:38.314780 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json: {Name:mk4f0b9157dab9cab07456fdbb9784414d74dbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:24:38.314908 241645 start.go:365] acquiring machines lock for multinode-944570: {Name:mk9a092bb7d2f42c1b785aa1d546d37ad26cec77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0830 20:24:38.314938 241645 start.go:369] acquired machines lock for "multinode-944570" in 15.217µs
I0830 20:24:38.314954 241645 start.go:93] Provisioning new machine with config: &{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0830 20:24:38.315006 241645 start.go:125] createHost starting for "" (driver="kvm2")
I0830 20:24:38.316725 241645 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0830 20:24:38.316841 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:24:38.316868 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:24:38.330465 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
I0830 20:24:38.330868 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:24:38.331433 241645 main.go:141] libmachine: Using API Version 1
I0830 20:24:38.331457 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:24:38.332665 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:24:38.333120 241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
I0830 20:24:38.334046 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:24:38.334277 241645 start.go:159] libmachine.API.Create for "multinode-944570" (driver="kvm2")
I0830 20:24:38.334312 241645 client.go:168] LocalClient.Create starting
I0830 20:24:38.334342 241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem
I0830 20:24:38.334388 241645 main.go:141] libmachine: Decoding PEM data...
I0830 20:24:38.334411 241645 main.go:141] libmachine: Parsing certificate...
I0830 20:24:38.334477 241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem
I0830 20:24:38.334504 241645 main.go:141] libmachine: Decoding PEM data...
I0830 20:24:38.334520 241645 main.go:141] libmachine: Parsing certificate...
I0830 20:24:38.334544 241645 main.go:141] libmachine: Running pre-create checks...
I0830 20:24:38.334557 241645 main.go:141] libmachine: (multinode-944570) Calling .PreCreateCheck
I0830 20:24:38.334891 241645 main.go:141] libmachine: (multinode-944570) Calling .GetConfigRaw
I0830 20:24:38.335339 241645 main.go:141] libmachine: Creating machine...
I0830 20:24:38.335356 241645 main.go:141] libmachine: (multinode-944570) Calling .Create
I0830 20:24:38.335506 241645 main.go:141] libmachine: (multinode-944570) Creating KVM machine...
I0830 20:24:38.336846 241645 main.go:141] libmachine: (multinode-944570) DBG | found existing default KVM network
I0830 20:24:38.337568 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.337425 241668 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029a00}
I0830 20:24:38.342513 241645 main.go:141] libmachine: (multinode-944570) DBG | trying to create private KVM network mk-multinode-944570 192.168.39.0/24...
I0830 20:24:38.421481 241645 main.go:141] libmachine: (multinode-944570) DBG | private KVM network mk-multinode-944570 192.168.39.0/24 created
I0830 20:24:38.421520 241645 main.go:141] libmachine: (multinode-944570) Setting up store path in /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570 ...
I0830 20:24:38.421538 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.421419 241668 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17145-222139/.minikube
I0830 20:24:38.421566 241645 main.go:141] libmachine: (multinode-944570) Building disk image from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
I0830 20:24:38.421589 241645 main.go:141] libmachine: (multinode-944570) Downloading /home/jenkins/minikube-integration/17145-222139/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
I0830 20:24:38.658920 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.658756 241668 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa...
I0830 20:24:38.856798 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.856648 241668 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/multinode-944570.rawdisk...
I0830 20:24:38.856835 241645 main.go:141] libmachine: (multinode-944570) DBG | Writing magic tar header
I0830 20:24:38.856851 241645 main.go:141] libmachine: (multinode-944570) DBG | Writing SSH key tar header
I0830 20:24:38.856863 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:38.856774 241668 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570 ...
I0830 20:24:38.856875 241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570
I0830 20:24:38.856924 241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570 (perms=drwx------)
I0830 20:24:38.856947 241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines (perms=drwxr-xr-x)
I0830 20:24:38.856965 241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube (perms=drwxr-xr-x)
I0830 20:24:38.856989 241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139 (perms=drwxrwxr-x)
I0830 20:24:38.857002 241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines
I0830 20:24:38.857013 241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0830 20:24:38.857021 241645 main.go:141] libmachine: (multinode-944570) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0830 20:24:38.857028 241645 main.go:141] libmachine: (multinode-944570) Creating domain...
I0830 20:24:38.857035 241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube
I0830 20:24:38.857043 241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139
I0830 20:24:38.857049 241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0830 20:24:38.857056 241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home/jenkins
I0830 20:24:38.857062 241645 main.go:141] libmachine: (multinode-944570) DBG | Checking permissions on dir: /home
I0830 20:24:38.857070 241645 main.go:141] libmachine: (multinode-944570) DBG | Skipping /home - not owner
I0830 20:24:38.858299 241645 main.go:141] libmachine: (multinode-944570) define libvirt domain using xml:
I0830 20:24:38.858338 241645 main.go:141] libmachine: (multinode-944570) <domain type='kvm'>
I0830 20:24:38.858359 241645 main.go:141] libmachine: (multinode-944570) <name>multinode-944570</name>
I0830 20:24:38.858373 241645 main.go:141] libmachine: (multinode-944570) <memory unit='MiB'>2200</memory>
I0830 20:24:38.858380 241645 main.go:141] libmachine: (multinode-944570) <vcpu>2</vcpu>
I0830 20:24:38.858385 241645 main.go:141] libmachine: (multinode-944570) <features>
I0830 20:24:38.858391 241645 main.go:141] libmachine: (multinode-944570) <acpi/>
I0830 20:24:38.858398 241645 main.go:141] libmachine: (multinode-944570) <apic/>
I0830 20:24:38.858404 241645 main.go:141] libmachine: (multinode-944570) <pae/>
I0830 20:24:38.858413 241645 main.go:141] libmachine: (multinode-944570)
I0830 20:24:38.858425 241645 main.go:141] libmachine: (multinode-944570) </features>
I0830 20:24:38.858439 241645 main.go:141] libmachine: (multinode-944570) <cpu mode='host-passthrough'>
I0830 20:24:38.858468 241645 main.go:141] libmachine: (multinode-944570)
I0830 20:24:38.858493 241645 main.go:141] libmachine: (multinode-944570) </cpu>
I0830 20:24:38.858508 241645 main.go:141] libmachine: (multinode-944570) <os>
I0830 20:24:38.858524 241645 main.go:141] libmachine: (multinode-944570) <type>hvm</type>
I0830 20:24:38.858539 241645 main.go:141] libmachine: (multinode-944570) <boot dev='cdrom'/>
I0830 20:24:38.858552 241645 main.go:141] libmachine: (multinode-944570) <boot dev='hd'/>
I0830 20:24:38.858566 241645 main.go:141] libmachine: (multinode-944570) <bootmenu enable='no'/>
I0830 20:24:38.858578 241645 main.go:141] libmachine: (multinode-944570) </os>
I0830 20:24:38.858596 241645 main.go:141] libmachine: (multinode-944570) <devices>
I0830 20:24:38.858617 241645 main.go:141] libmachine: (multinode-944570) <disk type='file' device='cdrom'>
I0830 20:24:38.858636 241645 main.go:141] libmachine: (multinode-944570) <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/boot2docker.iso'/>
I0830 20:24:38.858652 241645 main.go:141] libmachine: (multinode-944570) <target dev='hdc' bus='scsi'/>
I0830 20:24:38.858667 241645 main.go:141] libmachine: (multinode-944570) <readonly/>
I0830 20:24:38.858679 241645 main.go:141] libmachine: (multinode-944570) </disk>
I0830 20:24:38.858695 241645 main.go:141] libmachine: (multinode-944570) <disk type='file' device='disk'>
I0830 20:24:38.858710 241645 main.go:141] libmachine: (multinode-944570) <driver name='qemu' type='raw' cache='default' io='threads' />
I0830 20:24:38.858779 241645 main.go:141] libmachine: (multinode-944570) <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/multinode-944570.rawdisk'/>
I0830 20:24:38.858815 241645 main.go:141] libmachine: (multinode-944570) <target dev='hda' bus='virtio'/>
I0830 20:24:38.858831 241645 main.go:141] libmachine: (multinode-944570) </disk>
I0830 20:24:38.858844 241645 main.go:141] libmachine: (multinode-944570) <interface type='network'>
I0830 20:24:38.858859 241645 main.go:141] libmachine: (multinode-944570) <source network='mk-multinode-944570'/>
I0830 20:24:38.858871 241645 main.go:141] libmachine: (multinode-944570) <model type='virtio'/>
I0830 20:24:38.858884 241645 main.go:141] libmachine: (multinode-944570) </interface>
I0830 20:24:38.858896 241645 main.go:141] libmachine: (multinode-944570) <interface type='network'>
I0830 20:24:38.858924 241645 main.go:141] libmachine: (multinode-944570) <source network='default'/>
I0830 20:24:38.858948 241645 main.go:141] libmachine: (multinode-944570) <model type='virtio'/>
I0830 20:24:38.858964 241645 main.go:141] libmachine: (multinode-944570) </interface>
I0830 20:24:38.858976 241645 main.go:141] libmachine: (multinode-944570) <serial type='pty'>
I0830 20:24:38.858990 241645 main.go:141] libmachine: (multinode-944570) <target port='0'/>
I0830 20:24:38.859001 241645 main.go:141] libmachine: (multinode-944570) </serial>
I0830 20:24:38.859021 241645 main.go:141] libmachine: (multinode-944570) <console type='pty'>
I0830 20:24:38.859037 241645 main.go:141] libmachine: (multinode-944570) <target type='serial' port='0'/>
I0830 20:24:38.859055 241645 main.go:141] libmachine: (multinode-944570) </console>
I0830 20:24:38.859067 241645 main.go:141] libmachine: (multinode-944570) <rng model='virtio'>
I0830 20:24:38.859082 241645 main.go:141] libmachine: (multinode-944570) <backend model='random'>/dev/random</backend>
I0830 20:24:38.859092 241645 main.go:141] libmachine: (multinode-944570) </rng>
I0830 20:24:38.859104 241645 main.go:141] libmachine: (multinode-944570)
I0830 20:24:38.859118 241645 main.go:141] libmachine: (multinode-944570)
I0830 20:24:38.859131 241645 main.go:141] libmachine: (multinode-944570) </devices>
I0830 20:24:38.859142 241645 main.go:141] libmachine: (multinode-944570) </domain>
I0830 20:24:38.859157 241645 main.go:141] libmachine: (multinode-944570)
I0830 20:24:38.863828 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:65:c5:b0 in network default
I0830 20:24:38.864343 241645 main.go:141] libmachine: (multinode-944570) Ensuring networks are active...
I0830 20:24:38.864366 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:38.865057 241645 main.go:141] libmachine: (multinode-944570) Ensuring network default is active
I0830 20:24:38.865375 241645 main.go:141] libmachine: (multinode-944570) Ensuring network mk-multinode-944570 is active
I0830 20:24:38.865863 241645 main.go:141] libmachine: (multinode-944570) Getting domain xml...
I0830 20:24:38.866477 241645 main.go:141] libmachine: (multinode-944570) Creating domain...
I0830 20:24:40.088518 241645 main.go:141] libmachine: (multinode-944570) Waiting to get IP...
I0830 20:24:40.089305 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:40.089634 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:40.089683 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:40.089612 241668 retry.go:31] will retry after 222.540492ms: waiting for machine to come up
I0830 20:24:40.314007 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:40.314535 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:40.314560 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:40.314475 241668 retry.go:31] will retry after 290.614479ms: waiting for machine to come up
I0830 20:24:40.607022 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:40.607398 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:40.607422 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:40.607367 241668 retry.go:31] will retry after 406.297764ms: waiting for machine to come up
I0830 20:24:41.014923 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:41.015410 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:41.015444 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:41.015372 241668 retry.go:31] will retry after 516.548653ms: waiting for machine to come up
I0830 20:24:41.533085 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:41.533545 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:41.533568 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:41.533486 241668 retry.go:31] will retry after 758.9067ms: waiting for machine to come up
I0830 20:24:42.293602 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:42.294014 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:42.294047 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:42.293953 241668 retry.go:31] will retry after 639.466704ms: waiting for machine to come up
I0830 20:24:42.934908 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:42.935382 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:42.935411 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:42.935332 241668 retry.go:31] will retry after 880.132321ms: waiting for machine to come up
I0830 20:24:43.817512 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:43.818048 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:43.818075 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:43.818003 241668 retry.go:31] will retry after 908.818154ms: waiting for machine to come up
I0830 20:24:44.728538 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:44.729000 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:44.729025 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:44.728941 241668 retry.go:31] will retry after 1.123347298s: waiting for machine to come up
I0830 20:24:45.854259 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:45.854692 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:45.854716 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:45.854639 241668 retry.go:31] will retry after 1.502405087s: waiting for machine to come up
I0830 20:24:47.359507 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:47.359928 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:47.359957 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:47.359886 241668 retry.go:31] will retry after 1.968504913s: waiting for machine to come up
I0830 20:24:49.330159 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:49.330610 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:49.330645 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:49.330580 241668 retry.go:31] will retry after 2.700334878s: waiting for machine to come up
I0830 20:24:52.034447 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:52.034943 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:52.034967 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:52.034881 241668 retry.go:31] will retry after 3.66452335s: waiting for machine to come up
I0830 20:24:55.702938 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:24:55.703375 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find current IP address of domain multinode-944570 in network mk-multinode-944570
I0830 20:24:55.703398 241645 main.go:141] libmachine: (multinode-944570) DBG | I0830 20:24:55.703350 241668 retry.go:31] will retry after 5.039181171s: waiting for machine to come up
I0830 20:25:00.745412 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:00.745948 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has current primary IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:00.745970 241645 main.go:141] libmachine: (multinode-944570) Found IP for machine: 192.168.39.254
I0830 20:25:00.745980 241645 main.go:141] libmachine: (multinode-944570) Reserving static IP address...
I0830 20:25:00.746463 241645 main.go:141] libmachine: (multinode-944570) DBG | unable to find host DHCP lease matching {name: "multinode-944570", mac: "52:54:00:50:42:84", ip: "192.168.39.254"} in network mk-multinode-944570
I0830 20:25:00.820259 241645 main.go:141] libmachine: (multinode-944570) DBG | Getting to WaitForSSH function...
I0830 20:25:00.820288 241645 main.go:141] libmachine: (multinode-944570) Reserved static IP address: 192.168.39.254
I0830 20:25:00.820302 241645 main.go:141] libmachine: (multinode-944570) Waiting for SSH to be available...
I0830 20:25:00.822903 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:00.823346 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:42:84}
I0830 20:25:00.823379 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:00.823556 241645 main.go:141] libmachine: (multinode-944570) DBG | Using SSH client type: external
I0830 20:25:00.823580 241645 main.go:141] libmachine: (multinode-944570) DBG | Using SSH private key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa (-rw-------)
I0830 20:25:00.823619 241645 main.go:141] libmachine: (multinode-944570) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa -p 22] /usr/bin/ssh <nil>}
I0830 20:25:00.823652 241645 main.go:141] libmachine: (multinode-944570) DBG | About to run SSH command:
I0830 20:25:00.823674 241645 main.go:141] libmachine: (multinode-944570) DBG | exit 0
I0830 20:25:00.918852 241645 main.go:141] libmachine: (multinode-944570) DBG | SSH cmd err, output: <nil>:
I0830 20:25:00.919136 241645 main.go:141] libmachine: (multinode-944570) KVM machine creation complete!
I0830 20:25:00.919547 241645 main.go:141] libmachine: (multinode-944570) Calling .GetConfigRaw
I0830 20:25:00.920154 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:00.920376 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:00.920544 241645 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0830 20:25:00.920562 241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
I0830 20:25:00.922044 241645 main.go:141] libmachine: Detecting operating system of created instance...
I0830 20:25:00.922061 241645 main.go:141] libmachine: Waiting for SSH to be available...
I0830 20:25:00.922070 241645 main.go:141] libmachine: Getting to WaitForSSH function...
I0830 20:25:00.922080 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:00.924403 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:00.924775 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:00.924819 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:00.924928 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:00.925120 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:00.925287 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:00.925414 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:00.925582 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:00.926249 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:00.926271 241645 main.go:141] libmachine: About to run SSH command:
exit 0
I0830 20:25:01.054153 241645 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0830 20:25:01.054177 241645 main.go:141] libmachine: Detecting the provisioner...
I0830 20:25:01.054195 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:01.056963 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.057363 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:01.057400 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.057515 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:01.057711 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.057884 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.058065 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:01.058226 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:01.058617 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:01.058630 241645 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0830 20:25:01.191831 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g88b5c50-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0830 20:25:01.191899 241645 main.go:141] libmachine: found compatible host: buildroot
I0830 20:25:01.191915 241645 main.go:141] libmachine: Provisioning with buildroot...
I0830 20:25:01.191931 241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
I0830 20:25:01.192233 241645 buildroot.go:166] provisioning hostname "multinode-944570"
I0830 20:25:01.192266 241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
I0830 20:25:01.192471 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:01.194982 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.195339 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:01.195370 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.195504 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:01.195701 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.195854 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.195955 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:01.196145 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:01.196528 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:01.196542 241645 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-944570 && echo "multinode-944570" | sudo tee /etc/hostname
I0830 20:25:01.338059 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570
I0830 20:25:01.338086 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:01.341056 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.341394 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:01.341438 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.341638 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:01.341835 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.342008 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.342173 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:01.342385 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:01.342775 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:01.342792 241645 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-944570' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570/g' /etc/hosts;
else
echo '127.0.1.1 multinode-944570' | sudo tee -a /etc/hosts;
fi
fi
I0830 20:25:01.483104 241645 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0830 20:25:01.483131 241645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
I0830 20:25:01.483173 241645 buildroot.go:174] setting up certificates
I0830 20:25:01.483183 241645 provision.go:83] configureAuth start
I0830 20:25:01.483195 241645 main.go:141] libmachine: (multinode-944570) Calling .GetMachineName
I0830 20:25:01.483529 241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
I0830 20:25:01.486542 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.486968 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:01.487005 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.487211 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:01.489871 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.490219 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:01.490270 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.490408 241645 provision.go:138] copyHostCerts
I0830 20:25:01.490452 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
I0830 20:25:01.490491 241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
I0830 20:25:01.490503 241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
I0830 20:25:01.490583 241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
I0830 20:25:01.490707 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
I0830 20:25:01.490735 241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
I0830 20:25:01.490742 241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
I0830 20:25:01.490783 241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
I0830 20:25:01.490844 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
I0830 20:25:01.490866 241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
I0830 20:25:01.490875 241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
I0830 20:25:01.490906 241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
I0830 20:25:01.490969 241645 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570 san=[192.168.39.254 192.168.39.254 localhost 127.0.0.1 minikube multinode-944570]
I0830 20:25:01.709034 241645 provision.go:172] copyRemoteCerts
I0830 20:25:01.709108 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0830 20:25:01.709147 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:01.711738 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.712084 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:01.712124 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.712279 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:01.712503 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.712682 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:01.712851 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
I0830 20:25:01.809341 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0830 20:25:01.809417 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0830 20:25:01.831592 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem -> /etc/docker/server.pem
I0830 20:25:01.831657 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0830 20:25:01.853695 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0830 20:25:01.853768 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0830 20:25:01.873313 241645 provision.go:86] duration metric: configureAuth took 390.114671ms
I0830 20:25:01.873335 241645 buildroot.go:189] setting minikube options for container-runtime
I0830 20:25:01.873493 241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:25:01.873517 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:01.873813 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:01.876220 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.876551 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:01.876595 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:01.876794 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:01.876992 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.877188 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:01.877389 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:01.877561 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:01.877971 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:01.877988 241645 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0830 20:25:02.008621 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0830 20:25:02.008644 241645 buildroot.go:70] root file system type: tmpfs
I0830 20:25:02.008767 241645 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0830 20:25:02.008785 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:02.011410 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.011756 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:02.011782 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.011918 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:02.012094 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:02.012232 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:02.012360 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:02.012523 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:02.012908 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:02.012966 241645 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0830 20:25:02.156549 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0830 20:25:02.156582 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:02.159519 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.159924 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:02.159957 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.160223 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:02.160457 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:02.160635 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:02.160768 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:02.160985 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:02.161389 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:02.161408 241645 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0830 20:25:02.899373 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0830 20:25:02.899405 241645 main.go:141] libmachine: Checking connection to Docker...
I0830 20:25:02.899418 241645 main.go:141] libmachine: (multinode-944570) Calling .GetURL
I0830 20:25:02.900707 241645 main.go:141] libmachine: (multinode-944570) DBG | Using libvirt version 6000000
I0830 20:25:02.902913 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.903249 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:02.903277 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.903449 241645 main.go:141] libmachine: Docker is up and running!
I0830 20:25:02.903468 241645 main.go:141] libmachine: Reticulating splines...
I0830 20:25:02.903476 241645 client.go:171] LocalClient.Create took 24.569157111s
I0830 20:25:02.903500 241645 start.go:167] duration metric: libmachine.API.Create for "multinode-944570" took 24.569226582s
I0830 20:25:02.903510 241645 start.go:300] post-start starting for "multinode-944570" (driver="kvm2")
I0830 20:25:02.903519 241645 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0830 20:25:02.903541 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:02.903865 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0830 20:25:02.903890 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:02.906005 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.906328 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:02.906361 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:02.906513 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:02.906744 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:02.906942 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:02.907118 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
I0830 20:25:02.999889 241645 ssh_runner.go:195] Run: cat /etc/os-release
I0830 20:25:03.003463 241645 command_runner.go:130] > NAME=Buildroot
I0830 20:25:03.003489 241645 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
I0830 20:25:03.003493 241645 command_runner.go:130] > ID=buildroot
I0830 20:25:03.003499 241645 command_runner.go:130] > VERSION_ID=2021.02.12
I0830 20:25:03.003503 241645 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0830 20:25:03.003551 241645 info.go:137] Remote host: Buildroot 2021.02.12
I0830 20:25:03.003575 241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
I0830 20:25:03.003658 241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
I0830 20:25:03.003750 241645 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
I0830 20:25:03.003761 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /etc/ssl/certs/2293472.pem
I0830 20:25:03.003837 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0830 20:25:03.011525 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
I0830 20:25:03.032042 241645 start.go:303] post-start completed in 128.515897ms
I0830 20:25:03.032101 241645 main.go:141] libmachine: (multinode-944570) Calling .GetConfigRaw
I0830 20:25:03.032744 241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
I0830 20:25:03.035354 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.035725 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:03.035764 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.035980 241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:25:03.036145 241645 start.go:128] duration metric: createHost completed in 24.721130412s
I0830 20:25:03.036175 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:03.038222 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.038509 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:03.038538 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.038684 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:03.038880 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:03.039021 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:03.039182 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:03.039346 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:25:03.039785 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.254 22 <nil> <nil>}
I0830 20:25:03.039799 241645 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0830 20:25:03.171749 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427103.145508549
I0830 20:25:03.171775 241645 fix.go:206] guest clock: 1693427103.145508549
I0830 20:25:03.171783 241645 fix.go:219] Guest: 2023-08-30 20:25:03.145508549 +0000 UTC Remote: 2023-08-30 20:25:03.036163347 +0000 UTC m=+24.831539919 (delta=109.345202ms)
I0830 20:25:03.171803 241645 fix.go:190] guest clock delta is within tolerance: 109.345202ms
I0830 20:25:03.171810 241645 start.go:83] releasing machines lock for "multinode-944570", held for 24.856863444s
I0830 20:25:03.171828 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:03.172092 241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
I0830 20:25:03.174430 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.174803 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:03.174828 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.174993 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:03.175589 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:03.175764 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:03.175840 241645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0830 20:25:03.175904 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:03.176013 241645 ssh_runner.go:195] Run: cat /version.json
I0830 20:25:03.176037 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:03.178485 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.178855 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.178876 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:03.178891 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.179065 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:03.179257 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:03.179376 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:03.179404 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:03.179413 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:03.179591 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:03.179590 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
I0830 20:25:03.179756 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:03.180044 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:03.180191 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
I0830 20:25:03.302799 241645 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0830 20:25:03.302869 241645 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
I0830 20:25:03.303015 241645 ssh_runner.go:195] Run: systemctl --version
I0830 20:25:03.307972 241645 command_runner.go:130] > systemd 247 (247)
I0830 20:25:03.308007 241645 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0830 20:25:03.308356 241645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0830 20:25:03.313222 241645 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0830 20:25:03.313435 241645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0830 20:25:03.313503 241645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0830 20:25:03.327116 241645 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0830 20:25:03.327378 241645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0830 20:25:03.327402 241645 start.go:466] detecting cgroup driver to use...
I0830 20:25:03.327619 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:25:03.343837 241645 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0830 20:25:03.344435 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0830 20:25:03.353064 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0830 20:25:03.361596 241645 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0830 20:25:03.361651 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0830 20:25:03.370242 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:25:03.378852 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0830 20:25:03.387260 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:25:03.396213 241645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0830 20:25:03.404744 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0830 20:25:03.413143 241645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0830 20:25:03.420408 241645 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0830 20:25:03.420483 241645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0830 20:25:03.427869 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:25:03.524462 241645 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0830 20:25:03.541100 241645 start.go:466] detecting cgroup driver to use...
I0830 20:25:03.541187 241645 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0830 20:25:03.563244 241645 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0830 20:25:03.563357 241645 command_runner.go:130] > [Unit]
I0830 20:25:03.563386 241645 command_runner.go:130] > Description=Docker Application Container Engine
I0830 20:25:03.563396 241645 command_runner.go:130] > Documentation=https://docs.docker.com
I0830 20:25:03.563408 241645 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0830 20:25:03.563418 241645 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0830 20:25:03.563426 241645 command_runner.go:130] > StartLimitBurst=3
I0830 20:25:03.563436 241645 command_runner.go:130] > StartLimitIntervalSec=60
I0830 20:25:03.563445 241645 command_runner.go:130] > [Service]
I0830 20:25:03.563452 241645 command_runner.go:130] > Type=notify
I0830 20:25:03.563461 241645 command_runner.go:130] > Restart=on-failure
I0830 20:25:03.563473 241645 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0830 20:25:03.563488 241645 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0830 20:25:03.563502 241645 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0830 20:25:03.563516 241645 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0830 20:25:03.563528 241645 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0830 20:25:03.563542 241645 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0830 20:25:03.563557 241645 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0830 20:25:03.563578 241645 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0830 20:25:03.563592 241645 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0830 20:25:03.563601 241645 command_runner.go:130] > ExecStart=
I0830 20:25:03.563627 241645 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0830 20:25:03.563647 241645 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0830 20:25:03.563658 241645 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0830 20:25:03.563671 241645 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0830 20:25:03.563681 241645 command_runner.go:130] > LimitNOFILE=infinity
I0830 20:25:03.563687 241645 command_runner.go:130] > LimitNPROC=infinity
I0830 20:25:03.563696 241645 command_runner.go:130] > LimitCORE=infinity
I0830 20:25:03.563708 241645 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0830 20:25:03.563720 241645 command_runner.go:130] > # Only systemd 226 and above support this version.
I0830 20:25:03.563729 241645 command_runner.go:130] > TasksMax=infinity
I0830 20:25:03.563739 241645 command_runner.go:130] > TimeoutStartSec=0
I0830 20:25:03.563751 241645 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0830 20:25:03.563761 241645 command_runner.go:130] > Delegate=yes
I0830 20:25:03.563774 241645 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0830 20:25:03.563783 241645 command_runner.go:130] > KillMode=process
I0830 20:25:03.563795 241645 command_runner.go:130] > [Install]
I0830 20:25:03.563810 241645 command_runner.go:130] > WantedBy=multi-user.target
I0830 20:25:03.564476 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:25:03.576659 241645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0830 20:25:03.592708 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:25:03.603906 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:25:03.614217 241645 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0830 20:25:03.638599 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:25:03.650369 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:25:03.665974 241645 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0830 20:25:03.666335 241645 ssh_runner.go:195] Run: which cri-dockerd
I0830 20:25:03.669618 241645 command_runner.go:130] > /usr/bin/cri-dockerd
I0830 20:25:03.669860 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0830 20:25:03.677395 241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0830 20:25:03.691827 241645 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0830 20:25:03.796524 241645 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0830 20:25:03.902931 241645 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0830 20:25:03.902965 241645 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0830 20:25:03.918652 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:25:04.016625 241645 ssh_runner.go:195] Run: sudo systemctl restart docker
I0830 20:25:05.368771 241645 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.352099845s)
I0830 20:25:05.368858 241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:25:05.466546 241645 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0830 20:25:05.576501 241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:25:05.684851 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:25:05.794664 241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0830 20:25:05.811767 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:25:05.914344 241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0830 20:25:05.984596 241645 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0830 20:25:05.984689 241645 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0830 20:25:05.990092 241645 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0830 20:25:05.990122 241645 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0830 20:25:05.990132 241645 command_runner.go:130] > Device: 16h/22d Inode: 906 Links: 1
I0830 20:25:05.990139 241645 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0830 20:25:05.990145 241645 command_runner.go:130] > Access: 2023-08-30 20:25:05.905320606 +0000
I0830 20:25:05.990150 241645 command_runner.go:130] > Modify: 2023-08-30 20:25:05.905320606 +0000
I0830 20:25:05.990154 241645 command_runner.go:130] > Change: 2023-08-30 20:25:05.907323346 +0000
I0830 20:25:05.990158 241645 command_runner.go:130] > Birth: -
I0830 20:25:05.990337 241645 start.go:534] Will wait 60s for crictl version
I0830 20:25:05.990399 241645 ssh_runner.go:195] Run: which crictl
I0830 20:25:05.994229 241645 command_runner.go:130] > /usr/bin/crictl
I0830 20:25:05.994314 241645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0830 20:25:06.033459 241645 command_runner.go:130] > Version: 0.1.0
I0830 20:25:06.033482 241645 command_runner.go:130] > RuntimeName: docker
I0830 20:25:06.033486 241645 command_runner.go:130] > RuntimeVersion: 24.0.5
I0830 20:25:06.033492 241645 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0830 20:25:06.033519 241645 start.go:550] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.5
RuntimeApiVersion: v1alpha2
I0830 20:25:06.033580 241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0830 20:25:06.057956 241645 command_runner.go:130] > 24.0.5
I0830 20:25:06.058230 241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0830 20:25:06.083787 241645 command_runner.go:130] > 24.0.5
I0830 20:25:06.086937 241645 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
I0830 20:25:06.086981 241645 main.go:141] libmachine: (multinode-944570) Calling .GetIP
I0830 20:25:06.089771 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:06.090200 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:06.090265 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:06.090492 241645 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0830 20:25:06.094327 241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0830 20:25:06.105852 241645 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0830 20:25:06.105911 241645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0830 20:25:06.122627 241645 docker.go:636] Got preloaded images:
I0830 20:25:06.122653 241645 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
I0830 20:25:06.122742 241645 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0830 20:25:06.131502 241645 command_runner.go:139] > {"Repositories":{}}
I0830 20:25:06.131790 241645 ssh_runner.go:195] Run: which lz4
I0830 20:25:06.135029 241645 command_runner.go:130] > /usr/bin/lz4
I0830 20:25:06.135173 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0830 20:25:06.135279 241645 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0830 20:25:06.139161 241645 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0830 20:25:06.139198 241645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0830 20:25:06.139217 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422113676 bytes)
I0830 20:25:07.585973 241645 docker.go:600] Took 1.450719 seconds to copy over tarball
I0830 20:25:07.586052 241645 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0830 20:25:09.823196 241645 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.237111552s)
I0830 20:25:09.823234 241645 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0830 20:25:09.862854 241645 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0830 20:25:09.871865 241645 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.1":"sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2":"sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.1":"sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195":"sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.1":"sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c":"sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5
ade845b500bba5"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.1":"sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4":"sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
I0830 20:25:09.872029 241645 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
I0830 20:25:09.886945 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:25:09.989511 241645 ssh_runner.go:195] Run: sudo systemctl restart docker
I0830 20:25:14.405230 241645 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.415676095s)
I0830 20:25:14.405322 241645 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0830 20:25:14.422787 241645 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.1
I0830 20:25:14.422809 241645 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.1
I0830 20:25:14.422815 241645 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.1
I0830 20:25:14.422827 241645 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.1
I0830 20:25:14.422831 241645 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I0830 20:25:14.422836 241645 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0830 20:25:14.422840 241645 command_runner.go:130] > registry.k8s.io/pause:3.9
I0830 20:25:14.422845 241645 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0830 20:25:14.423955 241645 docker.go:636] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0830 20:25:14.423981 241645 cache_images.go:84] Images are preloaded, skipping loading
I0830 20:25:14.424035 241645 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0830 20:25:14.448889 241645 command_runner.go:130] > cgroupfs
I0830 20:25:14.449160 241645 cni.go:84] Creating CNI manager for ""
I0830 20:25:14.449186 241645 cni.go:136] 1 nodes found, recommending kindnet
I0830 20:25:14.449212 241645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0830 20:25:14.449243 241645 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.254 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-944570 NodeName:multinode-944570 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0830 20:25:14.449461 241645 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.254
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-944570"
kubeletExtraArgs:
node-ip: 192.168.39.254
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0830 20:25:14.449567 241645 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-944570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.254
[Install]
config:
{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0830 20:25:14.449633 241645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
I0830 20:25:14.458160 241645 command_runner.go:130] > kubeadm
I0830 20:25:14.458178 241645 command_runner.go:130] > kubectl
I0830 20:25:14.458182 241645 command_runner.go:130] > kubelet
I0830 20:25:14.458202 241645 binaries.go:44] Found k8s binaries, skipping transfer
I0830 20:25:14.458278 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0830 20:25:14.466023 241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
I0830 20:25:14.480270 241645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0830 20:25:14.494299 241645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
I0830 20:25:14.508858 241645 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0830 20:25:14.512350 241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0830 20:25:14.523664 241645 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570 for IP: 192.168.39.254
I0830 20:25:14.523700 241645 certs.go:190] acquiring lock for shared ca certs: {Name:mk1ac5fe312bfdaa0e7afaffac50c875afeaeaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:14.523876 241645 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key
I0830 20:25:14.523917 241645 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key
I0830 20:25:14.523955 241645 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key
I0830 20:25:14.523971 241645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt with IP's: []
I0830 20:25:14.604845 241645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt ...
I0830 20:25:14.604876 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt: {Name:mk3a81bce3b329f75a188d0b1d2532a803bc802a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:14.605076 241645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key ...
I0830 20:25:14.605091 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key: {Name:mk274cc8b2182f52eba6fef4283857d540e33f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:14.605185 241645 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77
I0830 20:25:14.605200 241645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77 with IP's: [192.168.39.254 10.96.0.1 127.0.0.1 10.0.0.1]
I0830 20:25:14.697642 241645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77 ...
I0830 20:25:14.697675 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77: {Name:mk8de6e98fe5500c86a02985d64d4574319c01c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:14.697886 241645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77 ...
I0830 20:25:14.697902 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77: {Name:mk009aec9a9754bad8f4b6865632165d91d2d16f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:14.697991 241645 certs.go:337] copying /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt.9e1cae77 -> /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt
I0830 20:25:14.698061 241645 certs.go:341] copying /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key.9e1cae77 -> /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key
I0830 20:25:14.698118 241645 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key
I0830 20:25:14.698131 241645 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt with IP's: []
I0830 20:25:14.888701 241645 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt ...
I0830 20:25:14.888734 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt: {Name:mk023f547b5553e72f5c740f1d18b5133c723004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:14.888909 241645 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key ...
I0830 20:25:14.888920 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key: {Name:mkd9897d466f60278c59be0457ce47ee40541bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:14.888988 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0830 20:25:14.889005 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0830 20:25:14.889016 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0830 20:25:14.889028 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0830 20:25:14.889041 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0830 20:25:14.889052 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0830 20:25:14.889065 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0830 20:25:14.889077 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0830 20:25:14.889126 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem (1338 bytes)
W0830 20:25:14.889163 241645 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347_empty.pem, impossibly tiny 0 bytes
I0830 20:25:14.889171 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem (1679 bytes)
I0830 20:25:14.889197 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem (1082 bytes)
I0830 20:25:14.889220 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem (1123 bytes)
I0830 20:25:14.889243 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem (1675 bytes)
I0830 20:25:14.889279 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem (1708 bytes)
I0830 20:25:14.889303 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem -> /usr/share/ca-certificates/229347.pem
I0830 20:25:14.889316 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /usr/share/ca-certificates/2293472.pem
I0830 20:25:14.889329 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0830 20:25:14.889883 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0830 20:25:14.912136 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0830 20:25:14.932813 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0830 20:25:14.955446 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0830 20:25:14.976550 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0830 20:25:14.996785 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0830 20:25:15.017343 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0830 20:25:15.038584 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0830 20:25:15.059781 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem --> /usr/share/ca-certificates/229347.pem (1338 bytes)
I0830 20:25:15.080315 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /usr/share/ca-certificates/2293472.pem (1708 bytes)
I0830 20:25:15.101293 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0830 20:25:15.121688 241645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0830 20:25:15.135274 241645 ssh_runner.go:195] Run: openssl version
I0830 20:25:15.140405 241645 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0830 20:25:15.140482 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2293472.pem && ln -fs /usr/share/ca-certificates/2293472.pem /etc/ssl/certs/2293472.pem"
I0830 20:25:15.149304 241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2293472.pem
I0830 20:25:15.153204 241645 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
I0830 20:25:15.153401 241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
I0830 20:25:15.153451 241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2293472.pem
I0830 20:25:15.158234 241645 command_runner.go:130] > 3ec20f2e
I0830 20:25:15.158403 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2293472.pem /etc/ssl/certs/3ec20f2e.0"
I0830 20:25:15.167474 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0830 20:25:15.176508 241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0830 20:25:15.180617 241645 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
I0830 20:25:15.180780 241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
I0830 20:25:15.180825 241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0830 20:25:15.185637 241645 command_runner.go:130] > b5213941
I0830 20:25:15.185862 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0830 20:25:15.194768 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229347.pem && ln -fs /usr/share/ca-certificates/229347.pem /etc/ssl/certs/229347.pem"
I0830 20:25:15.203890 241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229347.pem
I0830 20:25:15.207968 241645 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
I0830 20:25:15.208108 241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
I0830 20:25:15.208147 241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229347.pem
I0830 20:25:15.212830 241645 command_runner.go:130] > 51391683
I0830 20:25:15.213037 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/229347.pem /etc/ssl/certs/51391683.0"
I0830 20:25:15.221723 241645 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0830 20:25:15.225370 241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0830 20:25:15.225403 241645 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0830 20:25:15.225458 241645 kubeadm.go:404] StartCluster: {Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0830 20:25:15.225584 241645 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0830 20:25:15.241509 241645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0830 20:25:15.249423 241645 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
I0830 20:25:15.249447 241645 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
I0830 20:25:15.249457 241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I0830 20:25:15.249527 241645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0830 20:25:15.257391 241645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0830 20:25:15.265194 241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0830 20:25:15.265221 241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0830 20:25:15.265232 241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0830 20:25:15.265245 241645 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0830 20:25:15.265283 241645 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0830 20:25:15.265324 241645 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0830 20:25:15.592806 241645 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0830 20:25:15.592841 241645 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0830 20:25:25.871537 241645 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
I0830 20:25:25.871568 241645 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
I0830 20:25:25.871617 241645 kubeadm.go:322] [preflight] Running pre-flight checks
I0830 20:25:25.871630 241645 command_runner.go:130] > [preflight] Running pre-flight checks
I0830 20:25:25.871714 241645 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0830 20:25:25.871725 241645 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
I0830 20:25:25.871844 241645 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0830 20:25:25.871877 241645 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I0830 20:25:25.872045 241645 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0830 20:25:25.872069 241645 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0830 20:25:25.872170 241645 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0830 20:25:25.872192 241645 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0830 20:25:25.874013 241645 out.go:204] - Generating certificates and keys ...
I0830 20:25:25.874110 241645 command_runner.go:130] > [certs] Using existing ca certificate authority
I0830 20:25:25.874122 241645 kubeadm.go:322] [certs] Using existing ca certificate authority
I0830 20:25:25.874220 241645 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0830 20:25:25.874238 241645 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0830 20:25:25.874338 241645 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
I0830 20:25:25.874347 241645 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0830 20:25:25.874422 241645 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
I0830 20:25:25.874430 241645 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0830 20:25:25.874514 241645 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
I0830 20:25:25.874524 241645 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0830 20:25:25.874593 241645 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
I0830 20:25:25.874604 241645 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0830 20:25:25.874699 241645 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
I0830 20:25:25.874715 241645 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0830 20:25:25.874881 241645 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
I0830 20:25:25.874897 241645 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
I0830 20:25:25.874968 241645 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
I0830 20:25:25.874975 241645 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0830 20:25:25.875140 241645 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
I0830 20:25:25.875154 241645 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-944570] and IPs [192.168.39.254 127.0.0.1 ::1]
I0830 20:25:25.875249 241645 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
I0830 20:25:25.875260 241645 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0830 20:25:25.875354 241645 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
I0830 20:25:25.875376 241645 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0830 20:25:25.875440 241645 command_runner.go:130] > [certs] Generating "sa" key and public key
I0830 20:25:25.875448 241645 kubeadm.go:322] [certs] Generating "sa" key and public key
I0830 20:25:25.875527 241645 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0830 20:25:25.875536 241645 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0830 20:25:25.875624 241645 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0830 20:25:25.875635 241645 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0830 20:25:25.875711 241645 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0830 20:25:25.875722 241645 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0830 20:25:25.875799 241645 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0830 20:25:25.875801 241645 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0830 20:25:25.875883 241645 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0830 20:25:25.875891 241645 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0830 20:25:25.875989 241645 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0830 20:25:25.875998 241645 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0830 20:25:25.876082 241645 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0830 20:25:25.876092 241645 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0830 20:25:25.877841 241645 out.go:204] - Booting up control plane ...
I0830 20:25:25.877961 241645 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0830 20:25:25.877968 241645 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0830 20:25:25.878098 241645 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0830 20:25:25.878118 241645 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0830 20:25:25.878223 241645 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0830 20:25:25.878235 241645 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0830 20:25:25.878345 241645 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0830 20:25:25.878357 241645 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0830 20:25:25.878465 241645 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0830 20:25:25.878473 241645 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0830 20:25:25.878505 241645 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0830 20:25:25.878510 241645 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0830 20:25:25.878636 241645 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0830 20:25:25.878641 241645 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0830 20:25:25.878712 241645 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.504601 seconds
I0830 20:25:25.878718 241645 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504601 seconds
I0830 20:25:25.878906 241645 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0830 20:25:25.878920 241645 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0830 20:25:25.879034 241645 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0830 20:25:25.879041 241645 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0830 20:25:25.879087 241645 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
I0830 20:25:25.879093 241645 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0830 20:25:25.879275 241645 command_runner.go:130] > [mark-control-plane] Marking the node multinode-944570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0830 20:25:25.879285 241645 kubeadm.go:322] [mark-control-plane] Marking the node multinode-944570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0830 20:25:25.879369 241645 command_runner.go:130] > [bootstrap-token] Using token: 0rk0ip.77qqtwy1kihykz5t
I0830 20:25:25.879383 241645 kubeadm.go:322] [bootstrap-token] Using token: 0rk0ip.77qqtwy1kihykz5t
I0830 20:25:25.881082 241645 out.go:204] - Configuring RBAC rules ...
I0830 20:25:25.881213 241645 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0830 20:25:25.881226 241645 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0830 20:25:25.881326 241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0830 20:25:25.881339 241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0830 20:25:25.881489 241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0830 20:25:25.881497 241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0830 20:25:25.881635 241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0830 20:25:25.881643 241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0830 20:25:25.881817 241645 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0830 20:25:25.881833 241645 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0830 20:25:25.881942 241645 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0830 20:25:25.881962 241645 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0830 20:25:25.882094 241645 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0830 20:25:25.882102 241645 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0830 20:25:25.882161 241645 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0830 20:25:25.882185 241645 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0830 20:25:25.882283 241645 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0830 20:25:25.882290 241645 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0830 20:25:25.882296 241645 kubeadm.go:322]
I0830 20:25:25.882374 241645 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
I0830 20:25:25.882383 241645 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0830 20:25:25.882392 241645 kubeadm.go:322]
I0830 20:25:25.882520 241645 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
I0830 20:25:25.882538 241645 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0830 20:25:25.882544 241645 kubeadm.go:322]
I0830 20:25:25.882576 241645 command_runner.go:130] > mkdir -p $HOME/.kube
I0830 20:25:25.882586 241645 kubeadm.go:322] mkdir -p $HOME/.kube
I0830 20:25:25.882671 241645 command_runner.go:130] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0830 20:25:25.882677 241645 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0830 20:25:25.882748 241645 command_runner.go:130] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0830 20:25:25.882761 241645 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0830 20:25:25.882776 241645 kubeadm.go:322]
I0830 20:25:25.882850 241645 command_runner.go:130] > Alternatively, if you are the root user, you can run:
I0830 20:25:25.882862 241645 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0830 20:25:25.882869 241645 kubeadm.go:322]
I0830 20:25:25.882950 241645 command_runner.go:130] > export KUBECONFIG=/etc/kubernetes/admin.conf
I0830 20:25:25.882959 241645 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0830 20:25:25.882969 241645 kubeadm.go:322]
I0830 20:25:25.883041 241645 command_runner.go:130] > You should now deploy a pod network to the cluster.
I0830 20:25:25.883058 241645 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0830 20:25:25.883150 241645 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0830 20:25:25.883158 241645 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0830 20:25:25.883263 241645 command_runner.go:130] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0830 20:25:25.883276 241645 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0830 20:25:25.883283 241645 kubeadm.go:322]
I0830 20:25:25.883409 241645 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
I0830 20:25:25.883423 241645 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0830 20:25:25.883515 241645 command_runner.go:130] > and service account keys on each node and then running the following as root:
I0830 20:25:25.883530 241645 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0830 20:25:25.883546 241645 kubeadm.go:322]
I0830 20:25:25.883664 241645 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
I0830 20:25:25.883674 241645 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
I0830 20:25:25.883818 241645 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 \
I0830 20:25:25.883827 241645 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 \
I0830 20:25:25.883854 241645 command_runner.go:130] > --control-plane
I0830 20:25:25.883861 241645 kubeadm.go:322] --control-plane
I0830 20:25:25.883870 241645 kubeadm.go:322]
I0830 20:25:25.883981 241645 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
I0830 20:25:25.883990 241645 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0830 20:25:25.883996 241645 kubeadm.go:322]
I0830 20:25:25.884115 241645 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
I0830 20:25:25.884125 241645 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0rk0ip.77qqtwy1kihykz5t \
I0830 20:25:25.884249 241645 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6
I0830 20:25:25.884275 241645 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6
I0830 20:25:25.884288 241645 cni.go:84] Creating CNI manager for ""
I0830 20:25:25.884307 241645 cni.go:136] 1 nodes found, recommending kindnet
I0830 20:25:25.886028 241645 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0830 20:25:25.887290 241645 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0830 20:25:25.894367 241645 command_runner.go:130] > File: /opt/cni/bin/portmap
I0830 20:25:25.894392 241645 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I0830 20:25:25.894407 241645 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I0830 20:25:25.894419 241645 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0830 20:25:25.894429 241645 command_runner.go:130] > Access: 2023-08-30 20:24:50.661585107 +0000
I0830 20:25:25.894441 241645 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
I0830 20:25:25.894452 241645 command_runner.go:130] > Change: 2023-08-30 20:24:48.918585107 +0000
I0830 20:25:25.894460 241645 command_runner.go:130] > Birth: -
I0830 20:25:25.895015 241645 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
I0830 20:25:25.895031 241645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0830 20:25:25.946468 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0830 20:25:27.048421 241645 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
I0830 20:25:27.054294 241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0830 20:25:27.062899 241645 command_runner.go:130] > serviceaccount/kindnet created
I0830 20:25:27.079585 241645 command_runner.go:130] > daemonset.apps/kindnet created
I0830 20:25:27.082436 241645 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.135933063s)
I0830 20:25:27.082479 241645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0830 20:25:27.082588 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:27.082613 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588 minikube.k8s.io/name=multinode-944570 minikube.k8s.io/updated_at=2023_08_30T20_25_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:27.103906 241645 command_runner.go:130] > -16
I0830 20:25:27.104140 241645 ops.go:34] apiserver oom_adj: -16
I0830 20:25:27.259503 241645 command_runner.go:130] > node/multinode-944570 labeled
I0830 20:25:27.304084 241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
I0830 20:25:27.304257 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:27.408503 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:27.408688 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:27.505253 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:28.007756 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:28.114870 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:28.507452 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:28.596045 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:29.007855 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:29.094730 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:29.507317 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:29.582437 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:30.007615 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:30.099287 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:30.508014 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:30.598144 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:31.007849 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:31.096085 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:31.507378 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:31.607568 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:32.007769 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:32.095126 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:32.507951 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:32.600656 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:33.007821 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:33.089474 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:33.507716 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:33.584616 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:34.008071 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:34.104451 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:34.508111 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:34.600003 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:35.007552 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:35.096483 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:35.507638 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:35.596067 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:36.007719 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:36.096663 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:36.507181 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:36.600236 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:37.007888 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:37.102770 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:37.508097 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:37.605543 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:38.007169 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:38.099486 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:38.508190 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:38.727754 241645 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0830 20:25:39.007195 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0830 20:25:39.110641 241645 command_runner.go:130] > NAME SECRETS AGE
I0830 20:25:39.110674 241645 command_runner.go:130] > default 0 1s
I0830 20:25:39.112179 241645 kubeadm.go:1081] duration metric: took 12.029668645s to wait for elevateKubeSystemPrivileges.
I0830 20:25:39.112214 241645 kubeadm.go:406] StartCluster complete in 23.886760086s
I0830 20:25:39.112250 241645 settings.go:142] acquiring lock: {Name:mke973357c023e3c9107f2946103c543213b72a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:39.112344 241645 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17145-222139/kubeconfig
I0830 20:25:39.112996 241645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-222139/kubeconfig: {Name:mke2c13974c9c1f627b1ef76f3c4bc0d9584894b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:25:39.113277 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0830 20:25:39.113367 241645 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0830 20:25:39.113492 241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:25:39.113501 241645 addons.go:69] Setting default-storageclass=true in profile "multinode-944570"
I0830 20:25:39.113518 241645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-944570"
I0830 20:25:39.113494 241645 addons.go:69] Setting storage-provisioner=true in profile "multinode-944570"
I0830 20:25:39.113548 241645 addons.go:231] Setting addon storage-provisioner=true in "multinode-944570"
I0830 20:25:39.113585 241645 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17145-222139/kubeconfig
I0830 20:25:39.113610 241645 host.go:66] Checking if "multinode-944570" exists ...
I0830 20:25:39.113910 241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0830 20:25:39.114028 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:25:39.114059 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:25:39.114095 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:25:39.114148 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:25:39.114855 241645 cert_rotation.go:137] Starting client certificate rotation controller
I0830 20:25:39.115235 241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0830 20:25:39.115252 241645 round_trippers.go:469] Request Headers:
I0830 20:25:39.115263 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:39.115272 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:39.126389 241645 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0830 20:25:39.126416 241645 round_trippers.go:577] Response Headers:
I0830 20:25:39.126428 241645 round_trippers.go:580] Content-Length: 291
I0830 20:25:39.126437 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:39 GMT
I0830 20:25:39.126445 241645 round_trippers.go:580] Audit-Id: 7831cf68-a71e-460d-96a3-2487259424d4
I0830 20:25:39.126454 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:39.126463 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:39.126472 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:39.126485 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:39.126515 241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"390","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0830 20:25:39.127043 241645 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"390","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0830 20:25:39.127122 241645 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0830 20:25:39.127136 241645 round_trippers.go:469] Request Headers:
I0830 20:25:39.127147 241645 round_trippers.go:473] Content-Type: application/json
I0830 20:25:39.127160 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:39.127174 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:39.129430 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
I0830 20:25:39.129729 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41875
I0830 20:25:39.129931 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:25:39.130086 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:25:39.130495 241645 main.go:141] libmachine: Using API Version 1
I0830 20:25:39.130513 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:25:39.130541 241645 main.go:141] libmachine: Using API Version 1
I0830 20:25:39.130563 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:25:39.130863 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:25:39.130979 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:25:39.131179 241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
I0830 20:25:39.131427 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:25:39.131459 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:25:39.133409 241645 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17145-222139/kubeconfig
I0830 20:25:39.133768 241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0830 20:25:39.134176 241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
I0830 20:25:39.134201 241645 round_trippers.go:469] Request Headers:
I0830 20:25:39.134211 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:39.134222 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:39.139922 241645 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0830 20:25:39.139941 241645 round_trippers.go:577] Response Headers:
I0830 20:25:39.139948 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:39.139953 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:39.139959 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:39.139965 241645 round_trippers.go:580] Content-Length: 109
I0830 20:25:39.139973 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:39 GMT
I0830 20:25:39.139985 241645 round_trippers.go:580] Audit-Id: fff28a6e-3b77-4075-b7c6-485127c1d06c
I0830 20:25:39.139997 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:39.140017 241645 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[]}
I0830 20:25:39.140142 241645 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
I0830 20:25:39.140160 241645 round_trippers.go:577] Response Headers:
I0830 20:25:39.140169 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:39 GMT
I0830 20:25:39.140181 241645 round_trippers.go:580] Audit-Id: d6200b84-e318-4e9e-b6e1-c2c3e782b8cf
I0830 20:25:39.140192 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:39.140203 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:39.140214 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:39.140222 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:39.140229 241645 round_trippers.go:580] Content-Length: 291
I0830 20:25:39.140255 241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"391","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0830 20:25:39.140323 241645 addons.go:231] Setting addon default-storageclass=true in "multinode-944570"
I0830 20:25:39.140367 241645 host.go:66] Checking if "multinode-944570" exists ...
I0830 20:25:39.140399 241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0830 20:25:39.140411 241645 round_trippers.go:469] Request Headers:
I0830 20:25:39.140421 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:39.140433 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:39.140704 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:25:39.140733 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:25:39.145101 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:25:39.145124 241645 round_trippers.go:577] Response Headers:
I0830 20:25:39.145133 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:39.145139 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:39.145147 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:39.145155 241645 round_trippers.go:580] Content-Length: 291
I0830 20:25:39.145163 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:39 GMT
I0830 20:25:39.145176 241645 round_trippers.go:580] Audit-Id: 6944d652-b884-4106-8138-53b87fe4c71f
I0830 20:25:39.145189 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:39.145211 241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"391","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0830 20:25:39.145307 241645 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-944570" context rescaled to 1 replicas
I0830 20:25:39.145338 241645 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
I0830 20:25:39.148342 241645 out.go:177] * Verifying Kubernetes components...
I0830 20:25:39.146942 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
I0830 20:25:39.148851 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:25:39.150360 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0830 20:25:39.150915 241645 main.go:141] libmachine: Using API Version 1
I0830 20:25:39.150949 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:25:39.151317 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:25:39.151599 241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
I0830 20:25:39.153583 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:39.155405 241645 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0830 20:25:39.156868 241645 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0830 20:25:39.156887 241645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0830 20:25:39.156901 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
I0830 20:25:39.156909 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:39.157298 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:25:39.157864 241645 main.go:141] libmachine: Using API Version 1
I0830 20:25:39.157890 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:25:39.158284 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:25:39.158821 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:25:39.158860 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:25:39.160166 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:39.160598 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:39.160633 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:39.160818 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:39.161011 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:39.161181 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:39.161322 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
I0830 20:25:39.179849 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44619
I0830 20:25:39.180311 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:25:39.180972 241645 main.go:141] libmachine: Using API Version 1
I0830 20:25:39.181003 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:25:39.181364 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:25:39.181538 241645 main.go:141] libmachine: (multinode-944570) Calling .GetState
I0830 20:25:39.183329 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:25:39.183603 241645 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0830 20:25:39.183620 241645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0830 20:25:39.183643 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:25:39.186228 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:39.186623 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:25:39.186656 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:25:39.186838 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:25:39.187039 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:25:39.187238 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:25:39.187417 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
I0830 20:25:39.391096 241645 command_runner.go:130] > apiVersion: v1
I0830 20:25:39.391118 241645 command_runner.go:130] > data:
I0830 20:25:39.391122 241645 command_runner.go:130] > Corefile: |
I0830 20:25:39.391127 241645 command_runner.go:130] > .:53 {
I0830 20:25:39.391131 241645 command_runner.go:130] > errors
I0830 20:25:39.391136 241645 command_runner.go:130] > health {
I0830 20:25:39.391141 241645 command_runner.go:130] > lameduck 5s
I0830 20:25:39.391144 241645 command_runner.go:130] > }
I0830 20:25:39.391148 241645 command_runner.go:130] > ready
I0830 20:25:39.391160 241645 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0830 20:25:39.391164 241645 command_runner.go:130] > pods insecure
I0830 20:25:39.391169 241645 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0830 20:25:39.391174 241645 command_runner.go:130] > ttl 30
I0830 20:25:39.391177 241645 command_runner.go:130] > }
I0830 20:25:39.391181 241645 command_runner.go:130] > prometheus :9153
I0830 20:25:39.391186 241645 command_runner.go:130] > forward . /etc/resolv.conf {
I0830 20:25:39.391190 241645 command_runner.go:130] > max_concurrent 1000
I0830 20:25:39.391194 241645 command_runner.go:130] > }
I0830 20:25:39.391198 241645 command_runner.go:130] > cache 30
I0830 20:25:39.391201 241645 command_runner.go:130] > loop
I0830 20:25:39.391205 241645 command_runner.go:130] > reload
I0830 20:25:39.391211 241645 command_runner.go:130] > loadbalance
I0830 20:25:39.391215 241645 command_runner.go:130] > }
I0830 20:25:39.391220 241645 command_runner.go:130] > kind: ConfigMap
I0830 20:25:39.391228 241645 command_runner.go:130] > metadata:
I0830 20:25:39.391234 241645 command_runner.go:130] > creationTimestamp: "2023-08-30T20:25:25Z"
I0830 20:25:39.391238 241645 command_runner.go:130] > name: coredns
I0830 20:25:39.391244 241645 command_runner.go:130] > namespace: kube-system
I0830 20:25:39.391248 241645 command_runner.go:130] > resourceVersion: "266"
I0830 20:25:39.391253 241645 command_runner.go:130] > uid: 989d9dad-32d4-44a5-9cf9-98995b18ae7f
I0830 20:25:39.393916 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0830 20:25:39.394151 241645 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17145-222139/kubeconfig
I0830 20:25:39.394467 241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0830 20:25:39.394752 241645 node_ready.go:35] waiting up to 6m0s for node "multinode-944570" to be "Ready" ...
I0830 20:25:39.394841 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:39.394852 241645 round_trippers.go:469] Request Headers:
I0830 20:25:39.394865 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:39.394878 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:39.397459 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:39.397489 241645 round_trippers.go:577] Response Headers:
I0830 20:25:39.397500 241645 round_trippers.go:580] Audit-Id: cd6c7757-9b98-4f74-af25-e1cb9e4b7350
I0830 20:25:39.397512 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:39.397521 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:39.397535 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:39.397549 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:39.397562 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:39 GMT
I0830 20:25:39.397802 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:39.398624 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:39.398644 241645 round_trippers.go:469] Request Headers:
I0830 20:25:39.398655 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:39.398664 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:39.401200 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:39.401222 241645 round_trippers.go:577] Response Headers:
I0830 20:25:39.401233 241645 round_trippers.go:580] Audit-Id: 140d938d-33c7-4dfa-9917-1b0b5f9b7c2e
I0830 20:25:39.401243 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:39.401257 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:39.401270 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:39.401290 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:39.401298 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:39 GMT
I0830 20:25:39.401406 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:39.421665 241645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0830 20:25:39.487655 241645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0830 20:25:39.902243 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:39.902281 241645 round_trippers.go:469] Request Headers:
I0830 20:25:39.902293 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:39.902302 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:39.904654 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:39.904681 241645 round_trippers.go:577] Response Headers:
I0830 20:25:39.904692 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:39.904808 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:39.904829 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:39.904842 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:39 GMT
I0830 20:25:39.904854 241645 round_trippers.go:580] Audit-Id: a6887700-3e28-4e04-b319-b891b9b1d69e
I0830 20:25:39.904863 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:39.905020 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:40.284325 241645 command_runner.go:130] > configmap/coredns replaced
I0830 20:25:40.284373 241645 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0830 20:25:40.402656 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:40.402678 241645 round_trippers.go:469] Request Headers:
I0830 20:25:40.402688 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:40.402694 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:40.405191 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:40.405213 241645 round_trippers.go:577] Response Headers:
I0830 20:25:40.405223 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:40.405230 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:40.405239 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:40.405250 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:40.405258 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:40 GMT
I0830 20:25:40.405264 241645 round_trippers.go:580] Audit-Id: 8439450d-968f-4ead-969c-3a4b562f1ee3
I0830 20:25:40.405619 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:40.452449 241645 command_runner.go:130] > serviceaccount/storage-provisioner created
I0830 20:25:40.460958 241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
I0830 20:25:40.480685 241645 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0830 20:25:40.492917 241645 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0830 20:25:40.509256 241645 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
I0830 20:25:40.523729 241645 command_runner.go:130] > pod/storage-provisioner created
I0830 20:25:40.527054 241645 command_runner.go:130] > storageclass.storage.k8s.io/standard created
I0830 20:25:40.527096 241645 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039407422s)
I0830 20:25:40.527143 241645 main.go:141] libmachine: Making call to close driver server
I0830 20:25:40.527163 241645 main.go:141] libmachine: (multinode-944570) Calling .Close
I0830 20:25:40.527545 241645 main.go:141] libmachine: (multinode-944570) DBG | Closing plugin on server side
I0830 20:25:40.527596 241645 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:25:40.527611 241645 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:25:40.527637 241645 main.go:141] libmachine: Making call to close driver server
I0830 20:25:40.527665 241645 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.105959238s)
I0830 20:25:40.527715 241645 main.go:141] libmachine: Making call to close driver server
I0830 20:25:40.527733 241645 main.go:141] libmachine: (multinode-944570) Calling .Close
I0830 20:25:40.527673 241645 main.go:141] libmachine: (multinode-944570) Calling .Close
I0830 20:25:40.527983 241645 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:25:40.528001 241645 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:25:40.528078 241645 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:25:40.528096 241645 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:25:40.528111 241645 main.go:141] libmachine: Making call to close driver server
I0830 20:25:40.528124 241645 main.go:141] libmachine: (multinode-944570) Calling .Close
I0830 20:25:40.528178 241645 main.go:141] libmachine: Making call to close driver server
I0830 20:25:40.528188 241645 main.go:141] libmachine: (multinode-944570) Calling .Close
I0830 20:25:40.528358 241645 main.go:141] libmachine: (multinode-944570) DBG | Closing plugin on server side
I0830 20:25:40.528433 241645 main.go:141] libmachine: (multinode-944570) DBG | Closing plugin on server side
I0830 20:25:40.528450 241645 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:25:40.528472 241645 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:25:40.528556 241645 main.go:141] libmachine: Successfully made call to close driver server
I0830 20:25:40.528584 241645 main.go:141] libmachine: Making call to close connection to plugin binary
I0830 20:25:40.530496 241645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0830 20:25:40.531932 241645 addons.go:502] enable addons completed in 1.418563326s: enabled=[storage-provisioner default-storageclass]
I0830 20:25:40.902483 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:40.902507 241645 round_trippers.go:469] Request Headers:
I0830 20:25:40.902516 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:40.902522 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:40.905485 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:40.905513 241645 round_trippers.go:577] Response Headers:
I0830 20:25:40.905523 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:40 GMT
I0830 20:25:40.905533 241645 round_trippers.go:580] Audit-Id: 6683fd26-5ef4-47ed-ae19-94ff3b0a5f4c
I0830 20:25:40.905540 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:40.905548 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:40.905555 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:40.905564 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:40.905707 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:41.402269 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:41.402295 241645 round_trippers.go:469] Request Headers:
I0830 20:25:41.402304 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:41.402310 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:41.406012 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:41.406040 241645 round_trippers.go:577] Response Headers:
I0830 20:25:41.406049 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:41 GMT
I0830 20:25:41.406055 241645 round_trippers.go:580] Audit-Id: af02763c-2974-4778-880b-50daaa9235fe
I0830 20:25:41.406060 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:41.406066 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:41.406071 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:41.406077 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:41.406418 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:41.406838 241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
I0830 20:25:41.902133 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:41.902158 241645 round_trippers.go:469] Request Headers:
I0830 20:25:41.902167 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:41.902175 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:41.904959 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:41.904988 241645 round_trippers.go:577] Response Headers:
I0830 20:25:41.904999 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:41.905008 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:41.905020 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:41 GMT
I0830 20:25:41.905029 241645 round_trippers.go:580] Audit-Id: 158b970e-8438-434d-8f2b-2a5319e4503d
I0830 20:25:41.905038 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:41.905045 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:41.905338 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:42.401997 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:42.402022 241645 round_trippers.go:469] Request Headers:
I0830 20:25:42.402030 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:42.402036 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:42.404707 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:42.404733 241645 round_trippers.go:577] Response Headers:
I0830 20:25:42.404743 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:42 GMT
I0830 20:25:42.404753 241645 round_trippers.go:580] Audit-Id: 49b18478-d297-4d21-82bf-80723ec332bb
I0830 20:25:42.404762 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:42.404773 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:42.404781 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:42.404790 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:42.405184 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:42.903000 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:42.903032 241645 round_trippers.go:469] Request Headers:
I0830 20:25:42.903046 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:42.903056 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:42.905834 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:42.905854 241645 round_trippers.go:577] Response Headers:
I0830 20:25:42.905861 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:42 GMT
I0830 20:25:42.905867 241645 round_trippers.go:580] Audit-Id: 087184ac-7340-433f-b160-4e451ac8c785
I0830 20:25:42.905872 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:42.905878 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:42.905884 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:42.905894 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:42.906143 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:43.402491 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:43.402511 241645 round_trippers.go:469] Request Headers:
I0830 20:25:43.402519 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:43.402526 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:43.404977 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:43.405003 241645 round_trippers.go:577] Response Headers:
I0830 20:25:43.405014 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:43.405023 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:43.405031 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:43.405039 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:43.405048 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:43 GMT
I0830 20:25:43.405055 241645 round_trippers.go:580] Audit-Id: dc72bcf1-ab08-4a6c-9838-57f349b98460
I0830 20:25:43.405267 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:43.901950 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:43.901977 241645 round_trippers.go:469] Request Headers:
I0830 20:25:43.901990 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:43.902000 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:43.905380 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:43.905407 241645 round_trippers.go:577] Response Headers:
I0830 20:25:43.905418 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:43.905427 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:43.905436 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:43.905443 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:43.905452 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:43 GMT
I0830 20:25:43.905460 241645 round_trippers.go:580] Audit-Id: 881598e7-6c2f-4f4a-ac62-d322414fc74d
I0830 20:25:43.905941 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:43.906328 241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
I0830 20:25:44.402302 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:44.402327 241645 round_trippers.go:469] Request Headers:
I0830 20:25:44.402338 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:44.402354 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:44.405435 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:44.405459 241645 round_trippers.go:577] Response Headers:
I0830 20:25:44.405466 241645 round_trippers.go:580] Audit-Id: bee4395a-00f5-4b47-95bc-56558d3ff7b5
I0830 20:25:44.405472 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:44.405477 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:44.405482 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:44.405487 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:44.405493 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:44 GMT
I0830 20:25:44.406047 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:44.902877 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:44.902905 241645 round_trippers.go:469] Request Headers:
I0830 20:25:44.902916 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:44.902925 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:44.905673 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:44.905696 241645 round_trippers.go:577] Response Headers:
I0830 20:25:44.905703 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:44.905709 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:44.905715 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:44.905720 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:44.905725 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:44 GMT
I0830 20:25:44.905734 241645 round_trippers.go:580] Audit-Id: 52ecfd25-b95a-4fca-9139-c0202a1bcf9f
I0830 20:25:44.905897 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:45.402532 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:45.402554 241645 round_trippers.go:469] Request Headers:
I0830 20:25:45.402564 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:45.402572 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:45.405687 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:45.405706 241645 round_trippers.go:577] Response Headers:
I0830 20:25:45.405713 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:45.405718 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:45.405725 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:45.405734 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:45.405742 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:45 GMT
I0830 20:25:45.405760 241645 round_trippers.go:580] Audit-Id: 20d0e8c8-8acb-4919-9a21-fee0d25bf906
I0830 20:25:45.406021 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:45.902795 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:45.902827 241645 round_trippers.go:469] Request Headers:
I0830 20:25:45.902840 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:45.902850 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:45.905724 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:45.905751 241645 round_trippers.go:577] Response Headers:
I0830 20:25:45.905759 241645 round_trippers.go:580] Audit-Id: 7c77e3c0-7f61-4e11-a63c-d5765e8a21a1
I0830 20:25:45.905765 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:45.905771 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:45.905776 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:45.905781 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:45.905787 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:45 GMT
I0830 20:25:45.906014 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:45.906438 241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
I0830 20:25:46.402709 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:46.402741 241645 round_trippers.go:469] Request Headers:
I0830 20:25:46.402755 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:46.402763 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:46.405368 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:46.405397 241645 round_trippers.go:577] Response Headers:
I0830 20:25:46.405409 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:46.405418 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:46.405425 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:46.405434 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:46 GMT
I0830 20:25:46.405445 241645 round_trippers.go:580] Audit-Id: 56b7412e-dd0d-4793-906d-12925a78c023
I0830 20:25:46.405458 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:46.405644 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:46.902308 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:46.902333 241645 round_trippers.go:469] Request Headers:
I0830 20:25:46.902342 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:46.902348 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:46.905209 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:46.905232 241645 round_trippers.go:577] Response Headers:
I0830 20:25:46.905244 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:46.905252 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:46.905259 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:46.905267 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:46 GMT
I0830 20:25:46.905275 241645 round_trippers.go:580] Audit-Id: 6dcd4b7f-f32a-4b09-9689-8bb582fd47fd
I0830 20:25:46.905284 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:46.905393 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:47.402231 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:47.402280 241645 round_trippers.go:469] Request Headers:
I0830 20:25:47.402293 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:47.402303 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:47.404651 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:47.404673 241645 round_trippers.go:577] Response Headers:
I0830 20:25:47.404686 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:47.404695 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:47.404702 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:47.404717 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:47.404726 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:47 GMT
I0830 20:25:47.404736 241645 round_trippers.go:580] Audit-Id: daae3920-9c7b-4f62-885f-1a22a2b62d51
I0830 20:25:47.404924 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:47.902670 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:47.902695 241645 round_trippers.go:469] Request Headers:
I0830 20:25:47.902703 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:47.902710 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:47.905509 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:47.905527 241645 round_trippers.go:577] Response Headers:
I0830 20:25:47.905535 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:47.905541 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:47.905546 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:47 GMT
I0830 20:25:47.905551 241645 round_trippers.go:580] Audit-Id: 1b3e5548-03f2-4db0-ad65-32c2f03a7955
I0830 20:25:47.905557 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:47.905566 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:47.905913 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:48.402536 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:48.402561 241645 round_trippers.go:469] Request Headers:
I0830 20:25:48.402570 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:48.402576 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:48.405230 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:48.405273 241645 round_trippers.go:577] Response Headers:
I0830 20:25:48.405285 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:48.405298 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:48.405311 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:48.405321 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:48.405334 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:48 GMT
I0830 20:25:48.405344 241645 round_trippers.go:580] Audit-Id: 666498f3-61c6-45f3-8ba8-2f142f7e0d16
I0830 20:25:48.405496 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:48.405794 241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
I0830 20:25:48.902156 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:48.902177 241645 round_trippers.go:469] Request Headers:
I0830 20:25:48.902190 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:48.902196 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:48.904770 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:48.904792 241645 round_trippers.go:577] Response Headers:
I0830 20:25:48.904801 241645 round_trippers.go:580] Audit-Id: cec37ccb-083a-4dc1-8ecd-8a07eb4a8ef6
I0830 20:25:48.904809 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:48.904817 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:48.904824 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:48.904832 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:48.904840 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:48 GMT
I0830 20:25:48.905169 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:49.402570 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:49.402603 241645 round_trippers.go:469] Request Headers:
I0830 20:25:49.402615 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:49.402622 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:49.405505 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:49.405529 241645 round_trippers.go:577] Response Headers:
I0830 20:25:49.405537 241645 round_trippers.go:580] Audit-Id: 3dcca4d9-8d36-43f6-8089-4416c0a52b54
I0830 20:25:49.405543 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:49.405548 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:49.405554 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:49.405559 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:49.405564 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:49 GMT
I0830 20:25:49.405910 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:49.902661 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:49.902692 241645 round_trippers.go:469] Request Headers:
I0830 20:25:49.902701 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:49.902711 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:49.905796 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:49.905815 241645 round_trippers.go:577] Response Headers:
I0830 20:25:49.905823 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:49 GMT
I0830 20:25:49.905828 241645 round_trippers.go:580] Audit-Id: 70287395-6632-4054-954e-1b7b8265acd1
I0830 20:25:49.905834 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:49.905839 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:49.905844 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:49.905850 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:49.906011 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:50.402755 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:50.402783 241645 round_trippers.go:469] Request Headers:
I0830 20:25:50.402800 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:50.402806 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:50.405308 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:50.405330 241645 round_trippers.go:577] Response Headers:
I0830 20:25:50.405340 241645 round_trippers.go:580] Audit-Id: 555d00a0-47e9-48eb-9f6e-75cb3deed87b
I0830 20:25:50.405349 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:50.405358 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:50.405369 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:50.405382 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:50.405395 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:50 GMT
I0830 20:25:50.405514 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:50.405837 241645 node_ready.go:58] node "multinode-944570" has status "Ready":"False"
I0830 20:25:50.902187 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:50.902217 241645 round_trippers.go:469] Request Headers:
I0830 20:25:50.902227 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:50.902234 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:50.904974 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:50.904994 241645 round_trippers.go:577] Response Headers:
I0830 20:25:50.905001 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:50.905006 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:50.905011 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:50.905017 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:50 GMT
I0830 20:25:50.905022 241645 round_trippers.go:580] Audit-Id: be5bf54f-9ed7-491a-81c1-2dca62f67931
I0830 20:25:50.905027 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:50.905260 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"340","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0830 20:25:51.401947 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:51.401975 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.401986 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.401993 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.404606 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:51.404635 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.404646 241645 round_trippers.go:580] Audit-Id: 998ca9c7-7c14-411a-9f17-846800bea298
I0830 20:25:51.404655 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.404663 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.404671 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.404679 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.404696 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.404842 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:51.405248 241645 node_ready.go:49] node "multinode-944570" has status "Ready":"True"
I0830 20:25:51.405272 241645 node_ready.go:38] duration metric: took 12.010502433s waiting for node "multinode-944570" to be "Ready" ...
I0830 20:25:51.405284 241645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0830 20:25:51.405739 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
I0830 20:25:51.405770 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.405784 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.405795 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.410276 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:25:51.410301 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.410312 241645 round_trippers.go:580] Audit-Id: 318ccf73-0310-4bf4-a0ee-6ab55023c120
I0830 20:25:51.410324 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.410336 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.410344 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.410356 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.410366 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.411112 241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"439"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54013 chars]
I0830 20:25:51.415163 241645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
I0830 20:25:51.415240 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
I0830 20:25:51.415251 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.415260 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.415273 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.417466 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:51.417482 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.417492 241645 round_trippers.go:580] Audit-Id: 1f20a164-cab7-42e1-be1a-a1b0c1d0cd4f
I0830 20:25:51.417501 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.417510 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.417521 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.417533 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.417545 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.417678 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0830 20:25:51.418030 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:51.418039 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.418047 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.418056 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.420518 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:51.420538 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.420547 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.420555 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.420564 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.420570 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.420576 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.420582 241645 round_trippers.go:580] Audit-Id: 4737799e-e74d-46b8-8b93-d160240a4e8b
I0830 20:25:51.420783 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:51.421058 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
I0830 20:25:51.421069 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.421076 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.421082 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.423319 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:51.423338 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.423345 241645 round_trippers.go:580] Audit-Id: 01c00356-2e83-4f92-bfff-de45b77b2fe4
I0830 20:25:51.423371 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.423380 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.423394 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.423406 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.423414 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.423757 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0830 20:25:51.424078 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:51.424089 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.424096 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.424102 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.426234 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:51.426249 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.426258 241645 round_trippers.go:580] Audit-Id: cf575695-430b-4957-8271-7d258140d436
I0830 20:25:51.426266 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.426274 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.426283 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.426295 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.426307 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.426465 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:51.927168 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
I0830 20:25:51.927198 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.927212 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.927222 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.935139 241645 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0830 20:25:51.935167 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.935176 241645 round_trippers.go:580] Audit-Id: 88136e6f-52ef-4c98-89c2-a414b8728a84
I0830 20:25:51.935185 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.935198 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.935206 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.935216 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.935228 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.935378 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0830 20:25:51.935850 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:51.935865 241645 round_trippers.go:469] Request Headers:
I0830 20:25:51.935876 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:51.935885 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:51.938507 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:51.938527 241645 round_trippers.go:577] Response Headers:
I0830 20:25:51.938538 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:51.938547 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:51 GMT
I0830 20:25:51.938556 241645 round_trippers.go:580] Audit-Id: b57d91dc-97ce-49ce-8561-89b8e9e191a5
I0830 20:25:51.938566 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:51.938572 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:51.938580 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:51.938737 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:52.427453 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
I0830 20:25:52.427478 241645 round_trippers.go:469] Request Headers:
I0830 20:25:52.427487 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:52.427494 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:52.429902 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:52.429926 241645 round_trippers.go:577] Response Headers:
I0830 20:25:52.429936 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:52.429946 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:52.429954 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:52 GMT
I0830 20:25:52.429962 241645 round_trippers.go:580] Audit-Id: 66f717e8-665a-4771-bae2-06e3252d915e
I0830 20:25:52.429974 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:52.429986 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:52.430112 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0830 20:25:52.430595 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:52.430610 241645 round_trippers.go:469] Request Headers:
I0830 20:25:52.430617 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:52.430625 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:52.432675 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:52.432698 241645 round_trippers.go:577] Response Headers:
I0830 20:25:52.432707 241645 round_trippers.go:580] Audit-Id: 7a99663d-37d9-46ee-ad2c-7695700250f3
I0830 20:25:52.432716 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:52.432728 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:52.432736 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:52.432748 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:52.432758 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:52 GMT
I0830 20:25:52.432986 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:52.927683 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
I0830 20:25:52.927708 241645 round_trippers.go:469] Request Headers:
I0830 20:25:52.927721 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:52.927727 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:52.930440 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:52.930468 241645 round_trippers.go:577] Response Headers:
I0830 20:25:52.930482 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:52.930491 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:52.930503 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:52.930519 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:52.930528 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:52 GMT
I0830 20:25:52.930540 241645 round_trippers.go:580] Audit-Id: db550cd4-45d2-42cd-afdf-84988dbaabdb
I0830 20:25:52.930668 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"439","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0830 20:25:52.931273 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:52.931291 241645 round_trippers.go:469] Request Headers:
I0830 20:25:52.931320 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:52.931338 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:52.933758 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:52.933775 241645 round_trippers.go:577] Response Headers:
I0830 20:25:52.933782 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:52 GMT
I0830 20:25:52.933790 241645 round_trippers.go:580] Audit-Id: 4435011a-c7da-4513-9c65-ce98e9701347
I0830 20:25:52.933798 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:52.933806 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:52.933818 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:52.933828 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:52.934216 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:53.427702 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
I0830 20:25:53.427726 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.427734 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.427740 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.430538 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:53.430558 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.430566 241645 round_trippers.go:580] Audit-Id: ee96fe6e-d14f-4968-8d4e-9615df62c767
I0830 20:25:53.430572 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.430577 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.430583 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.430603 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.430614 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.430828 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
I0830 20:25:53.431371 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:53.431385 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.431394 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.431400 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.433284 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:25:53.433299 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.433305 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.433312 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.433320 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.433330 241645 round_trippers.go:580] Audit-Id: 4baf1276-1dc1-4a26-9607-4d7b8235d7c6
I0830 20:25:53.433343 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.433351 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.433553 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:53.433911 241645 pod_ready.go:92] pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace has status "Ready":"True"
I0830 20:25:53.433928 241645 pod_ready.go:81] duration metric: took 2.018737231s waiting for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.433941 241645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.434004 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-944570
I0830 20:25:53.434013 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.434024 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.434038 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.435720 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:25:53.435740 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.435750 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.435758 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.435769 241645 round_trippers.go:580] Audit-Id: 63760209-2707-45ea-ac73-2e8482dfde07
I0830 20:25:53.435777 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.435786 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.435794 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.435912 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-944570","namespace":"kube-system","uid":"8a7e3daf-bab9-401d-9448-0dd7a1710cc9","resourceVersion":"424","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.254:2379","kubernetes.io/config.hash":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.mirror":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839839858Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
I0830 20:25:53.436374 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:53.436390 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.436401 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.436418 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.438201 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:25:53.438220 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.438227 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.438233 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.438239 241645 round_trippers.go:580] Audit-Id: 178cde66-8334-4942-b2e7-c1e2fd6c2850
I0830 20:25:53.438248 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.438260 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.438268 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.438434 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:53.438740 241645 pod_ready.go:92] pod "etcd-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:25:53.438754 241645 pod_ready.go:81] duration metric: took 4.805533ms waiting for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.438767 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.438834 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-944570
I0830 20:25:53.438844 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.438852 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.438864 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.440664 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:25:53.440677 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.440686 241645 round_trippers.go:580] Audit-Id: b7e154a0-ffca-4afc-b69f-b08201308e2c
I0830 20:25:53.440692 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.440697 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.440706 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.440723 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.440733 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.440870 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-944570","namespace":"kube-system","uid":"396cdb5a-0161-4c66-8588-6c1c62cae7be","resourceVersion":"425","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.254:8443","kubernetes.io/config.hash":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.mirror":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
I0830 20:25:53.441246 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:53.441258 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.441265 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.441272 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.442994 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:25:53.443005 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.443011 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.443016 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.443021 241645 round_trippers.go:580] Audit-Id: 3e8feefa-96d4-45fc-bc26-dc479e95efc1
I0830 20:25:53.443027 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.443035 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.443050 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.443288 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:53.443534 241645 pod_ready.go:92] pod "kube-apiserver-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:25:53.443546 241645 pod_ready.go:81] duration metric: took 4.768245ms waiting for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.443554 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.443605 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-944570
I0830 20:25:53.443612 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.443619 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.443625 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.445430 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:25:53.445442 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.445447 241645 round_trippers.go:580] Audit-Id: 074beef7-3b15-4d9b-ba05-713c29a5fcd7
I0830 20:25:53.445453 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.445459 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.445466 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.445477 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.445493 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.445633 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-944570","namespace":"kube-system","uid":"6666fc21-62a9-4141-bb88-71bd4fe72b40","resourceVersion":"421","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.mirror":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841993Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
I0830 20:25:53.446053 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:53.446067 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.446076 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.446082 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.448153 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:53.448167 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.448173 241645 round_trippers.go:580] Audit-Id: 1777e997-989d-4172-bc3d-dd380b42e61b
I0830 20:25:53.448178 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.448183 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.448188 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.448199 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.448210 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.448331 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:53.448644 241645 pod_ready.go:92] pod "kube-controller-manager-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:25:53.448660 241645 pod_ready.go:81] duration metric: took 5.097503ms waiting for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.448672 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.602018 241645 request.go:629] Waited for 153.258705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
I0830 20:25:53.602085 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
I0830 20:25:53.602089 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.602097 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.602104 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.604877 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:53.604903 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.604913 241645 round_trippers.go:580] Audit-Id: 06bcaea7-4871-4beb-ae16-50497caeae81
I0830 20:25:53.604919 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.604924 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.604930 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.604935 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.604940 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.605174 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nqnp2","generateName":"kube-proxy-","namespace":"kube-system","uid":"fc7f17e0-b6ac-48c3-b449-e4eb3325505c","resourceVersion":"408","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"77539e61-eb1a-4d08-91c1-22ad50311843","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77539e61-eb1a-4d08-91c1-22ad50311843\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0830 20:25:53.803020 241645 request.go:629] Waited for 197.404051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:53.803099 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:53.803104 241645 round_trippers.go:469] Request Headers:
I0830 20:25:53.803114 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:53.803124 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:53.806197 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:53.806217 241645 round_trippers.go:577] Response Headers:
I0830 20:25:53.806225 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:53.806230 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:53.806235 241645 round_trippers.go:580] Audit-Id: 9bd2bc2e-9df3-41ab-a330-44333d74012c
I0830 20:25:53.806241 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:53.806246 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:53.806251 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:53.806482 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:53.806824 241645 pod_ready.go:92] pod "kube-proxy-nqnp2" in "kube-system" namespace has status "Ready":"True"
I0830 20:25:53.806839 241645 pod_ready.go:81] duration metric: took 358.15537ms waiting for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
I0830 20:25:53.806848 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:54.002316 241645 request.go:629] Waited for 195.376223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
I0830 20:25:54.002377 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
I0830 20:25:54.002382 241645 round_trippers.go:469] Request Headers:
I0830 20:25:54.002390 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:54.002397 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:54.005347 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:54.005367 241645 round_trippers.go:577] Response Headers:
I0830 20:25:54.005380 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:53 GMT
I0830 20:25:54.005395 241645 round_trippers.go:580] Audit-Id: 72efda1c-496f-4b60-a340-d541d4d7d460
I0830 20:25:54.005406 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:54.005415 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:54.005426 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:54.005433 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:54.005537 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-944570","namespace":"kube-system","uid":"c2c628f7-bc4f-4f01-b67d-e105c72b8275","resourceVersion":"422","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.mirror":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.seen":"2023-08-30T20:25:25.839835923Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
I0830 20:25:54.202399 241645 request.go:629] Waited for 196.421645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:54.202474 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:25:54.202482 241645 round_trippers.go:469] Request Headers:
I0830 20:25:54.202494 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:54.202504 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:54.206670 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:25:54.206691 241645 round_trippers.go:577] Response Headers:
I0830 20:25:54.206698 241645 round_trippers.go:580] Audit-Id: d91318e8-8e2b-40d3-9054-c77c030fab26
I0830 20:25:54.206704 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:54.206718 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:54.206739 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:54.206752 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:54.206760 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:54 GMT
I0830 20:25:54.207442 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0830 20:25:54.207829 241645 pod_ready.go:92] pod "kube-scheduler-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:25:54.207847 241645 pod_ready.go:81] duration metric: took 400.992914ms waiting for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:25:54.207861 241645 pod_ready.go:38] duration metric: took 2.802537009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0830 20:25:54.207889 241645 api_server.go:52] waiting for apiserver process to appear ...
I0830 20:25:54.207951 241645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0830 20:25:54.230824 241645 command_runner.go:130] > 1820
I0830 20:25:54.230859 241645 api_server.go:72] duration metric: took 15.085488796s to wait for apiserver process to appear ...
I0830 20:25:54.230867 241645 api_server.go:88] waiting for apiserver healthz status ...
I0830 20:25:54.230884 241645 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
I0830 20:25:54.236429 241645 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
ok
I0830 20:25:54.236488 241645 round_trippers.go:463] GET https://192.168.39.254:8443/version
I0830 20:25:54.236495 241645 round_trippers.go:469] Request Headers:
I0830 20:25:54.236503 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:54.236512 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:54.237485 241645 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0830 20:25:54.237500 241645 round_trippers.go:577] Response Headers:
I0830 20:25:54.237509 241645 round_trippers.go:580] Content-Length: 263
I0830 20:25:54.237517 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:54 GMT
I0830 20:25:54.237526 241645 round_trippers.go:580] Audit-Id: 062201ad-365d-44be-ac37-e52b08304abc
I0830 20:25:54.237541 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:54.237549 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:54.237561 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:54.237570 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:54.237593 241645 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.1",
"gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
"gitTreeState": "clean",
"buildDate": "2023-08-24T11:16:30Z",
"goVersion": "go1.20.7",
"compiler": "gc",
"platform": "linux/amd64"
}
I0830 20:25:54.237691 241645 api_server.go:141] control plane version: v1.28.1
I0830 20:25:54.237706 241645 api_server.go:131] duration metric: took 6.83424ms to wait for apiserver health ...
I0830 20:25:54.237713 241645 system_pods.go:43] waiting for kube-system pods to appear ...
I0830 20:25:54.402053 241645 request.go:629] Waited for 164.268495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
I0830 20:25:54.402139 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
I0830 20:25:54.402144 241645 round_trippers.go:469] Request Headers:
I0830 20:25:54.402152 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:54.402159 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:54.405537 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:54.405565 241645 round_trippers.go:577] Response Headers:
I0830 20:25:54.405576 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:54.405584 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:54.405592 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:54.405601 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:54 GMT
I0830 20:25:54.405609 241645 round_trippers.go:580] Audit-Id: 585a03e6-be8d-4612-a2ee-0f655d6fa953
I0830 20:25:54.405617 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:54.406496 241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54129 chars]
I0830 20:25:54.408224 241645 system_pods.go:59] 8 kube-system pods found
I0830 20:25:54.408252 241645 system_pods.go:61] "coredns-5dd5756b68-lzj6n" [19a6c9fa-86e0-4e7f-a62b-28ee984bdd45] Running
I0830 20:25:54.408260 241645 system_pods.go:61] "etcd-multinode-944570" [8a7e3daf-bab9-401d-9448-0dd7a1710cc9] Running
I0830 20:25:54.408266 241645 system_pods.go:61] "kindnet-mm2wq" [59593f9a-5462-4392-8bdc-a8150d335166] Running
I0830 20:25:54.408273 241645 system_pods.go:61] "kube-apiserver-multinode-944570" [396cdb5a-0161-4c66-8588-6c1c62cae7be] Running
I0830 20:25:54.408280 241645 system_pods.go:61] "kube-controller-manager-multinode-944570" [6666fc21-62a9-4141-bb88-71bd4fe72b40] Running
I0830 20:25:54.408287 241645 system_pods.go:61] "kube-proxy-nqnp2" [fc7f17e0-b6ac-48c3-b449-e4eb3325505c] Running
I0830 20:25:54.408294 241645 system_pods.go:61] "kube-scheduler-multinode-944570" [c2c628f7-bc4f-4f01-b67d-e105c72b8275] Running
I0830 20:25:54.408304 241645 system_pods.go:61] "storage-provisioner" [4e79c194-f047-45a2-9ed4-ffafbe983cda] Running
I0830 20:25:54.408311 241645 system_pods.go:74] duration metric: took 170.591918ms to wait for pod list to return data ...
I0830 20:25:54.408321 241645 default_sa.go:34] waiting for default service account to be created ...
I0830 20:25:54.602843 241645 request.go:629] Waited for 194.410178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/default/serviceaccounts
I0830 20:25:54.602920 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/default/serviceaccounts
I0830 20:25:54.602933 241645 round_trippers.go:469] Request Headers:
I0830 20:25:54.602945 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:54.602956 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:54.605658 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:54.605691 241645 round_trippers.go:577] Response Headers:
I0830 20:25:54.605702 241645 round_trippers.go:580] Audit-Id: c71947ca-1ac3-4883-ac79-cfee40bbb882
I0830 20:25:54.605709 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:54.605717 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:54.605726 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:54.605739 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:54.605748 241645 round_trippers.go:580] Content-Length: 261
I0830 20:25:54.605761 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:54 GMT
I0830 20:25:54.605789 241645 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a3dad9c1-08ae-4f4f-834f-75347ebf1272","resourceVersion":"344","creationTimestamp":"2023-08-30T20:25:38Z"}}]}
I0830 20:25:54.605996 241645 default_sa.go:45] found service account: "default"
I0830 20:25:54.606014 241645 default_sa.go:55] duration metric: took 197.685249ms for default service account to be created ...
I0830 20:25:54.606025 241645 system_pods.go:116] waiting for k8s-apps to be running ...
I0830 20:25:54.802521 241645 request.go:629] Waited for 196.399828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
I0830 20:25:54.802586 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
I0830 20:25:54.802597 241645 round_trippers.go:469] Request Headers:
I0830 20:25:54.802608 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:54.802619 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:54.806528 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:25:54.806551 241645 round_trippers.go:577] Response Headers:
I0830 20:25:54.806561 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:54.806570 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:54.806577 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:54.806585 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:54.806593 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:54 GMT
I0830 20:25:54.806602 241645 round_trippers.go:580] Audit-Id: 0a6c2ba5-2180-48a8-9064-77b4d96ce009
I0830 20:25:54.807627 241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54129 chars]
I0830 20:25:54.809307 241645 system_pods.go:86] 8 kube-system pods found
I0830 20:25:54.809329 241645 system_pods.go:89] "coredns-5dd5756b68-lzj6n" [19a6c9fa-86e0-4e7f-a62b-28ee984bdd45] Running
I0830 20:25:54.809338 241645 system_pods.go:89] "etcd-multinode-944570" [8a7e3daf-bab9-401d-9448-0dd7a1710cc9] Running
I0830 20:25:54.809344 241645 system_pods.go:89] "kindnet-mm2wq" [59593f9a-5462-4392-8bdc-a8150d335166] Running
I0830 20:25:54.809350 241645 system_pods.go:89] "kube-apiserver-multinode-944570" [396cdb5a-0161-4c66-8588-6c1c62cae7be] Running
I0830 20:25:54.809358 241645 system_pods.go:89] "kube-controller-manager-multinode-944570" [6666fc21-62a9-4141-bb88-71bd4fe72b40] Running
I0830 20:25:54.809365 241645 system_pods.go:89] "kube-proxy-nqnp2" [fc7f17e0-b6ac-48c3-b449-e4eb3325505c] Running
I0830 20:25:54.809375 241645 system_pods.go:89] "kube-scheduler-multinode-944570" [c2c628f7-bc4f-4f01-b67d-e105c72b8275] Running
I0830 20:25:54.809382 241645 system_pods.go:89] "storage-provisioner" [4e79c194-f047-45a2-9ed4-ffafbe983cda] Running
I0830 20:25:54.809392 241645 system_pods.go:126] duration metric: took 203.361169ms to wait for k8s-apps to be running ...
I0830 20:25:54.809405 241645 system_svc.go:44] waiting for kubelet service to be running ....
I0830 20:25:54.809457 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0830 20:25:54.822614 241645 system_svc.go:56] duration metric: took 13.200237ms WaitForService to wait for kubelet.
I0830 20:25:54.822640 241645 kubeadm.go:581] duration metric: took 15.677269744s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0830 20:25:54.822661 241645 node_conditions.go:102] verifying NodePressure condition ...
I0830 20:25:55.002030 241645 request.go:629] Waited for 179.292452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes
I0830 20:25:55.002106 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes
I0830 20:25:55.002113 241645 round_trippers.go:469] Request Headers:
I0830 20:25:55.002125 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:25:55.002152 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:25:55.004786 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:25:55.004822 241645 round_trippers.go:577] Response Headers:
I0830 20:25:55.004837 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:25:55.004846 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:25:55.004855 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:25:55.004864 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:25:55.004872 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:25:54 GMT
I0830 20:25:55.004881 241645 round_trippers.go:580] Audit-Id: 66f3267a-20be-49a8-a57b-143a9b2c40a1
I0830 20:25:55.005016 241645 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"433","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
I0830 20:25:55.005476 241645 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0830 20:25:55.005504 241645 node_conditions.go:123] node cpu capacity is 2
I0830 20:25:55.005522 241645 node_conditions.go:105] duration metric: took 182.855296ms to run NodePressure ...
I0830 20:25:55.005536 241645 start.go:228] waiting for startup goroutines ...
I0830 20:25:55.005545 241645 start.go:233] waiting for cluster config update ...
I0830 20:25:55.005558 241645 start.go:242] writing updated cluster config ...
I0830 20:25:55.008221 241645 out.go:177]
I0830 20:25:55.009869 241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:25:55.009950 241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:25:55.011573 241645 out.go:177] * Starting worker node multinode-944570-m02 in cluster multinode-944570
I0830 20:25:55.012857 241645 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
I0830 20:25:55.012880 241645 cache.go:57] Caching tarball of preloaded images
I0830 20:25:55.012989 241645 preload.go:174] Found /home/jenkins/minikube-integration/17145-222139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0830 20:25:55.013000 241645 cache.go:60] Finished verifying existence of preloaded tar for v1.28.1 on docker
I0830 20:25:55.013075 241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:25:55.013227 241645 start.go:365] acquiring machines lock for multinode-944570-m02: {Name:mk9a092bb7d2f42c1b785aa1d546d37ad26cec77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0830 20:25:55.013267 241645 start.go:369] acquired machines lock for "multinode-944570-m02" in 22.672µs
I0830 20:25:55.013285 241645 start.go:93] Provisioning new machine with config: &{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0830 20:25:55.013352 241645 start.go:125] createHost starting for "m02" (driver="kvm2")
I0830 20:25:55.015104 241645 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0830 20:25:55.015219 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:25:55.015253 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:25:55.029775 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39841
I0830 20:25:55.030254 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:25:55.030745 241645 main.go:141] libmachine: Using API Version 1
I0830 20:25:55.030765 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:25:55.031060 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:25:55.031328 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
I0830 20:25:55.031480 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:25:55.031657 241645 start.go:159] libmachine.API.Create for "multinode-944570" (driver="kvm2")
I0830 20:25:55.031705 241645 client.go:168] LocalClient.Create starting
I0830 20:25:55.031741 241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem
I0830 20:25:55.031777 241645 main.go:141] libmachine: Decoding PEM data...
I0830 20:25:55.031801 241645 main.go:141] libmachine: Parsing certificate...
I0830 20:25:55.031867 241645 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem
I0830 20:25:55.031894 241645 main.go:141] libmachine: Decoding PEM data...
I0830 20:25:55.031913 241645 main.go:141] libmachine: Parsing certificate...
I0830 20:25:55.031937 241645 main.go:141] libmachine: Running pre-create checks...
I0830 20:25:55.031950 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .PreCreateCheck
I0830 20:25:55.032124 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetConfigRaw
I0830 20:25:55.032554 241645 main.go:141] libmachine: Creating machine...
I0830 20:25:55.032574 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .Create
I0830 20:25:55.032724 241645 main.go:141] libmachine: (multinode-944570-m02) Creating KVM machine...
I0830 20:25:55.033893 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found existing default KVM network
I0830 20:25:55.033990 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found existing private KVM network mk-multinode-944570
I0830 20:25:55.034092 241645 main.go:141] libmachine: (multinode-944570-m02) Setting up store path in /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02 ...
I0830 20:25:55.034118 241645 main.go:141] libmachine: (multinode-944570-m02) Building disk image from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
I0830 20:25:55.034181 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.034075 242034 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17145-222139/.minikube
I0830 20:25:55.034270 241645 main.go:141] libmachine: (multinode-944570-m02) Downloading /home/jenkins/minikube-integration/17145-222139/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17145-222139/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
I0830 20:25:55.259494 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.259349 242034 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa...
I0830 20:25:55.370819 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.370677 242034 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/multinode-944570-m02.rawdisk...
I0830 20:25:55.370851 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Writing magic tar header
I0830 20:25:55.370864 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Writing SSH key tar header
I0830 20:25:55.370942 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:55.370865 242034 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02 ...
I0830 20:25:55.371056 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02
I0830 20:25:55.371105 241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02 (perms=drwx------)
I0830 20:25:55.371127 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube/machines
I0830 20:25:55.371155 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139/.minikube
I0830 20:25:55.371174 241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube/machines (perms=drwxr-xr-x)
I0830 20:25:55.371189 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17145-222139
I0830 20:25:55.371204 241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139/.minikube (perms=drwxr-xr-x)
I0830 20:25:55.371219 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0830 20:25:55.371233 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home/jenkins
I0830 20:25:55.371247 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Checking permissions on dir: /home
I0830 20:25:55.371261 241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration/17145-222139 (perms=drwxrwxr-x)
I0830 20:25:55.371279 241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0830 20:25:55.371301 241645 main.go:141] libmachine: (multinode-944570-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0830 20:25:55.371312 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Skipping /home - not owner
I0830 20:25:55.371331 241645 main.go:141] libmachine: (multinode-944570-m02) Creating domain...
I0830 20:25:55.372512 241645 main.go:141] libmachine: (multinode-944570-m02) define libvirt domain using xml:
I0830 20:25:55.372526 241645 main.go:141] libmachine: (multinode-944570-m02) <domain type='kvm'>
I0830 20:25:55.372539 241645 main.go:141] libmachine: (multinode-944570-m02) <name>multinode-944570-m02</name>
I0830 20:25:55.372563 241645 main.go:141] libmachine: (multinode-944570-m02) <memory unit='MiB'>2200</memory>
I0830 20:25:55.372577 241645 main.go:141] libmachine: (multinode-944570-m02) <vcpu>2</vcpu>
I0830 20:25:55.372588 241645 main.go:141] libmachine: (multinode-944570-m02) <features>
I0830 20:25:55.372597 241645 main.go:141] libmachine: (multinode-944570-m02) <acpi/>
I0830 20:25:55.372604 241645 main.go:141] libmachine: (multinode-944570-m02) <apic/>
I0830 20:25:55.372611 241645 main.go:141] libmachine: (multinode-944570-m02) <pae/>
I0830 20:25:55.372619 241645 main.go:141] libmachine: (multinode-944570-m02)
I0830 20:25:55.372642 241645 main.go:141] libmachine: (multinode-944570-m02) </features>
I0830 20:25:55.372655 241645 main.go:141] libmachine: (multinode-944570-m02) <cpu mode='host-passthrough'>
I0830 20:25:55.372670 241645 main.go:141] libmachine: (multinode-944570-m02)
I0830 20:25:55.372686 241645 main.go:141] libmachine: (multinode-944570-m02) </cpu>
I0830 20:25:55.372696 241645 main.go:141] libmachine: (multinode-944570-m02) <os>
I0830 20:25:55.372704 241645 main.go:141] libmachine: (multinode-944570-m02) <type>hvm</type>
I0830 20:25:55.372712 241645 main.go:141] libmachine: (multinode-944570-m02) <boot dev='cdrom'/>
I0830 20:25:55.372719 241645 main.go:141] libmachine: (multinode-944570-m02) <boot dev='hd'/>
I0830 20:25:55.372726 241645 main.go:141] libmachine: (multinode-944570-m02) <bootmenu enable='no'/>
I0830 20:25:55.372734 241645 main.go:141] libmachine: (multinode-944570-m02) </os>
I0830 20:25:55.372747 241645 main.go:141] libmachine: (multinode-944570-m02) <devices>
I0830 20:25:55.372760 241645 main.go:141] libmachine: (multinode-944570-m02) <disk type='file' device='cdrom'>
I0830 20:25:55.372796 241645 main.go:141] libmachine: (multinode-944570-m02) <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/boot2docker.iso'/>
I0830 20:25:55.372823 241645 main.go:141] libmachine: (multinode-944570-m02) <target dev='hdc' bus='scsi'/>
I0830 20:25:55.372834 241645 main.go:141] libmachine: (multinode-944570-m02) <readonly/>
I0830 20:25:55.372845 241645 main.go:141] libmachine: (multinode-944570-m02) </disk>
I0830 20:25:55.372855 241645 main.go:141] libmachine: (multinode-944570-m02) <disk type='file' device='disk'>
I0830 20:25:55.372868 241645 main.go:141] libmachine: (multinode-944570-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0830 20:25:55.372889 241645 main.go:141] libmachine: (multinode-944570-m02) <source file='/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/multinode-944570-m02.rawdisk'/>
I0830 20:25:55.372903 241645 main.go:141] libmachine: (multinode-944570-m02) <target dev='hda' bus='virtio'/>
I0830 20:25:55.372916 241645 main.go:141] libmachine: (multinode-944570-m02) </disk>
I0830 20:25:55.372928 241645 main.go:141] libmachine: (multinode-944570-m02) <interface type='network'>
I0830 20:25:55.372938 241645 main.go:141] libmachine: (multinode-944570-m02) <source network='mk-multinode-944570'/>
I0830 20:25:55.372945 241645 main.go:141] libmachine: (multinode-944570-m02) <model type='virtio'/>
I0830 20:25:55.372955 241645 main.go:141] libmachine: (multinode-944570-m02) </interface>
I0830 20:25:55.372969 241645 main.go:141] libmachine: (multinode-944570-m02) <interface type='network'>
I0830 20:25:55.372984 241645 main.go:141] libmachine: (multinode-944570-m02) <source network='default'/>
I0830 20:25:55.372996 241645 main.go:141] libmachine: (multinode-944570-m02) <model type='virtio'/>
I0830 20:25:55.373009 241645 main.go:141] libmachine: (multinode-944570-m02) </interface>
I0830 20:25:55.373021 241645 main.go:141] libmachine: (multinode-944570-m02) <serial type='pty'>
I0830 20:25:55.373031 241645 main.go:141] libmachine: (multinode-944570-m02) <target port='0'/>
I0830 20:25:55.373040 241645 main.go:141] libmachine: (multinode-944570-m02) </serial>
I0830 20:25:55.373054 241645 main.go:141] libmachine: (multinode-944570-m02) <console type='pty'>
I0830 20:25:55.373068 241645 main.go:141] libmachine: (multinode-944570-m02) <target type='serial' port='0'/>
I0830 20:25:55.373080 241645 main.go:141] libmachine: (multinode-944570-m02) </console>
I0830 20:25:55.373101 241645 main.go:141] libmachine: (multinode-944570-m02) <rng model='virtio'>
I0830 20:25:55.373118 241645 main.go:141] libmachine: (multinode-944570-m02) <backend model='random'>/dev/random</backend>
I0830 20:25:55.373131 241645 main.go:141] libmachine: (multinode-944570-m02) </rng>
I0830 20:25:55.373140 241645 main.go:141] libmachine: (multinode-944570-m02)
I0830 20:25:55.373154 241645 main.go:141] libmachine: (multinode-944570-m02)
I0830 20:25:55.373166 241645 main.go:141] libmachine: (multinode-944570-m02) </devices>
I0830 20:25:55.373179 241645 main.go:141] libmachine: (multinode-944570-m02) </domain>
I0830 20:25:55.373194 241645 main.go:141] libmachine: (multinode-944570-m02)
I0830 20:25:55.380007 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:20:a2:d3 in network default
I0830 20:25:55.380545 241645 main.go:141] libmachine: (multinode-944570-m02) Ensuring networks are active...
I0830 20:25:55.380571 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:55.381242 241645 main.go:141] libmachine: (multinode-944570-m02) Ensuring network default is active
I0830 20:25:55.381595 241645 main.go:141] libmachine: (multinode-944570-m02) Ensuring network mk-multinode-944570 is active
I0830 20:25:55.381918 241645 main.go:141] libmachine: (multinode-944570-m02) Getting domain xml...
I0830 20:25:55.382580 241645 main.go:141] libmachine: (multinode-944570-m02) Creating domain...
I0830 20:25:56.608809 241645 main.go:141] libmachine: (multinode-944570-m02) Waiting to get IP...
I0830 20:25:56.609657 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:56.610053 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:25:56.610090 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:56.610039 242034 retry.go:31] will retry after 302.606474ms: waiting for machine to come up
I0830 20:25:56.914633 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:56.915071 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:25:56.915100 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:56.915033 242034 retry.go:31] will retry after 375.67518ms: waiting for machine to come up
I0830 20:25:57.292648 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:57.293041 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:25:57.293075 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:57.292974 242034 retry.go:31] will retry after 350.879029ms: waiting for machine to come up
I0830 20:25:57.645554 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:57.646037 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:25:57.646067 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:57.645994 242034 retry.go:31] will retry after 460.417887ms: waiting for machine to come up
I0830 20:25:58.107592 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:58.108052 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:25:58.108084 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:58.107995 242034 retry.go:31] will retry after 642.731127ms: waiting for machine to come up
I0830 20:25:58.752095 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:58.752499 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:25:58.752535 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:58.752438 242034 retry.go:31] will retry after 724.563571ms: waiting for machine to come up
I0830 20:25:59.478464 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:25:59.478907 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:25:59.478938 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:25:59.478851 242034 retry.go:31] will retry after 715.405729ms: waiting for machine to come up
I0830 20:26:00.196342 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:00.196798 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:26:00.196822 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:00.196772 242034 retry.go:31] will retry after 1.251649903s: waiting for machine to come up
I0830 20:26:01.449666 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:01.450189 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:26:01.450213 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:01.450164 242034 retry.go:31] will retry after 1.20189777s: waiting for machine to come up
I0830 20:26:02.653445 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:02.653804 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:26:02.653832 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:02.653758 242034 retry.go:31] will retry after 1.604660089s: waiting for machine to come up
I0830 20:26:04.260497 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:04.260956 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:26:04.260989 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:04.260891 242034 retry.go:31] will retry after 2.060538508s: waiting for machine to come up
I0830 20:26:06.324713 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:06.325118 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:26:06.325162 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:06.325055 242034 retry.go:31] will retry after 2.818222039s: waiting for machine to come up
I0830 20:26:09.147034 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:09.147441 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:26:09.147465 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:09.147406 242034 retry.go:31] will retry after 2.829546399s: waiting for machine to come up
I0830 20:26:11.979378 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:11.979741 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find current IP address of domain multinode-944570-m02 in network mk-multinode-944570
I0830 20:26:11.979779 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | I0830 20:26:11.979698 242034 retry.go:31] will retry after 3.8123592s: waiting for machine to come up
I0830 20:26:15.794149 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:15.794665 241645 main.go:141] libmachine: (multinode-944570-m02) Found IP for machine: 192.168.39.87
I0830 20:26:15.794700 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has current primary IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:15.794712 241645 main.go:141] libmachine: (multinode-944570-m02) Reserving static IP address...
I0830 20:26:15.795045 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | unable to find host DHCP lease matching {name: "multinode-944570-m02", mac: "52:54:00:c1:a1:9d", ip: "192.168.39.87"} in network mk-multinode-944570
I0830 20:26:15.870100 241645 main.go:141] libmachine: (multinode-944570-m02) Reserved static IP address: 192.168.39.87
I0830 20:26:15.870137 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Getting to WaitForSSH function...
I0830 20:26:15.870148 241645 main.go:141] libmachine: (multinode-944570-m02) Waiting for SSH to be available...
I0830 20:26:15.872535 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:15.872977 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:15.873014 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:15.873101 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Using SSH client type: external
I0830 20:26:15.873131 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa (-rw-------)
I0830 20:26:15.873205 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0830 20:26:15.873234 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | About to run SSH command:
I0830 20:26:15.873257 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | exit 0
I0830 20:26:15.966891 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | SSH cmd err, output: <nil>:
I0830 20:26:15.967220 241645 main.go:141] libmachine: (multinode-944570-m02) KVM machine creation complete!
I0830 20:26:15.967573 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetConfigRaw
I0830 20:26:15.968143 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:15.968350 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:15.968538 241645 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0830 20:26:15.968554 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetState
I0830 20:26:15.969933 241645 main.go:141] libmachine: Detecting operating system of created instance...
I0830 20:26:15.969950 241645 main.go:141] libmachine: Waiting for SSH to be available...
I0830 20:26:15.969960 241645 main.go:141] libmachine: Getting to WaitForSSH function...
I0830 20:26:15.969971 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:15.972134 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:15.972480 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:15.972514 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:15.972728 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:15.972929 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:15.973110 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:15.973264 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:15.973435 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:15.974096 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:15.974115 241645 main.go:141] libmachine: About to run SSH command:
exit 0
I0830 20:26:16.098476 241645 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0830 20:26:16.098501 241645 main.go:141] libmachine: Detecting the provisioner...
I0830 20:26:16.098510 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:16.101490 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.101868 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.101898 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.102042 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:16.102237 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.102423 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.102563 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:16.102702 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:16.103095 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:16.103112 241645 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0830 20:26:16.223742 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g88b5c50-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0830 20:26:16.223862 241645 main.go:141] libmachine: found compatible host: buildroot
I0830 20:26:16.223880 241645 main.go:141] libmachine: Provisioning with buildroot...
I0830 20:26:16.223894 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
I0830 20:26:16.224192 241645 buildroot.go:166] provisioning hostname "multinode-944570-m02"
I0830 20:26:16.224223 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
I0830 20:26:16.224443 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:16.227187 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.227551 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.227600 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.227744 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:16.227921 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.228114 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.228285 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:16.228451 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:16.228836 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:16.228849 241645 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-944570-m02 && echo "multinode-944570-m02" | sudo tee /etc/hostname
I0830 20:26:16.363283 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-944570-m02
I0830 20:26:16.363331 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:16.366075 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.366444 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.366480 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.366617 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:16.366801 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.367014 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.367186 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:16.367365 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:16.367766 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:16.367782 241645 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-944570-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-944570-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-944570-m02' | sudo tee -a /etc/hosts;
fi
fi
I0830 20:26:16.493984 241645 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0830 20:26:16.494047 241645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17145-222139/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-222139/.minikube}
I0830 20:26:16.494074 241645 buildroot.go:174] setting up certificates
I0830 20:26:16.494088 241645 provision.go:83] configureAuth start
I0830 20:26:16.494106 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetMachineName
I0830 20:26:16.494400 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
I0830 20:26:16.497051 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.497396 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.497431 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.497609 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:16.499938 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.500246 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.500278 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.500408 241645 provision.go:138] copyHostCerts
I0830 20:26:16.500436 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
I0830 20:26:16.500464 241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem, removing ...
I0830 20:26:16.500473 241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem
I0830 20:26:16.500564 241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/ca.pem (1082 bytes)
I0830 20:26:16.500659 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
I0830 20:26:16.500682 241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem, removing ...
I0830 20:26:16.500691 241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem
I0830 20:26:16.500737 241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/cert.pem (1123 bytes)
I0830 20:26:16.500805 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
I0830 20:26:16.500825 241645 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem, removing ...
I0830 20:26:16.500832 241645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem
I0830 20:26:16.500865 241645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-222139/.minikube/key.pem (1675 bytes)
I0830 20:26:16.500929 241645 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem org=jenkins.multinode-944570-m02 san=[192.168.39.87 192.168.39.87 localhost 127.0.0.1 minikube multinode-944570-m02]
I0830 20:26:16.565338 241645 provision.go:172] copyRemoteCerts
I0830 20:26:16.565392 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0830 20:26:16.565419 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:16.568036 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.568397 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.568433 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.568582 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:16.568741 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.568851 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:16.569043 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
I0830 20:26:16.665811 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0830 20:26:16.665872 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0830 20:26:16.688096 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem -> /etc/docker/server.pem
I0830 20:26:16.688154 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0830 20:26:16.709910 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0830 20:26:16.709964 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0830 20:26:16.732379 241645 provision.go:86] duration metric: configureAuth took 238.276272ms
I0830 20:26:16.732406 241645 buildroot.go:189] setting minikube options for container-runtime
I0830 20:26:16.732589 241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:26:16.732614 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:16.732881 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:16.735477 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.735763 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.735793 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.736029 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:16.736219 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.736412 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.736567 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:16.736737 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:16.737237 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:16.737252 241645 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0830 20:26:16.861239 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0830 20:26:16.861264 241645 buildroot.go:70] root file system type: tmpfs
I0830 20:26:16.861378 241645 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0830 20:26:16.861395 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:16.863937 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.864240 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:16.864266 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:16.864478 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:16.864666 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.864846 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:16.864978 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:16.865142 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:16.865531 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:16.865587 241645 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.254"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0830 20:26:17.005103 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.254
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0830 20:26:17.005147 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:17.007937 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.008381 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:17.008415 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.008571 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:17.008765 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:17.008949 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:17.009134 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:17.009332 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:17.009924 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:17.009946 241645 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0830 20:26:17.764718 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0830 20:26:17.764756 241645 main.go:141] libmachine: Checking connection to Docker...
I0830 20:26:17.764771 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetURL
I0830 20:26:17.766130 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | Using libvirt version 6000000
I0830 20:26:17.768012 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.768389 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:17.768437 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.768601 241645 main.go:141] libmachine: Docker is up and running!
I0830 20:26:17.768619 241645 main.go:141] libmachine: Reticulating splines...
I0830 20:26:17.768626 241645 client.go:171] LocalClient.Create took 22.736910165s
I0830 20:26:17.768659 241645 start.go:167] duration metric: libmachine.API.Create for "multinode-944570" took 22.737003742s
I0830 20:26:17.768671 241645 start.go:300] post-start starting for "multinode-944570-m02" (driver="kvm2")
I0830 20:26:17.768683 241645 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0830 20:26:17.768704 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:17.768965 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0830 20:26:17.769001 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:17.771493 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.771869 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:17.771893 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.772060 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:17.772277 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:17.772460 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:17.772611 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
I0830 20:26:17.864068 241645 ssh_runner.go:195] Run: cat /etc/os-release
I0830 20:26:17.867815 241645 command_runner.go:130] > NAME=Buildroot
I0830 20:26:17.867843 241645 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
I0830 20:26:17.867849 241645 command_runner.go:130] > ID=buildroot
I0830 20:26:17.867858 241645 command_runner.go:130] > VERSION_ID=2021.02.12
I0830 20:26:17.867866 241645 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0830 20:26:17.867927 241645 info.go:137] Remote host: Buildroot 2021.02.12
I0830 20:26:17.867941 241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/addons for local assets ...
I0830 20:26:17.868012 241645 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-222139/.minikube/files for local assets ...
I0830 20:26:17.868090 241645 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> 2293472.pem in /etc/ssl/certs
I0830 20:26:17.868121 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /etc/ssl/certs/2293472.pem
I0830 20:26:17.868234 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0830 20:26:17.876101 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /etc/ssl/certs/2293472.pem (1708 bytes)
I0830 20:26:17.899545 241645 start.go:303] post-start completed in 130.860082ms
I0830 20:26:17.899598 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetConfigRaw
I0830 20:26:17.900218 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
I0830 20:26:17.902905 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.903241 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:17.903271 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.903547 241645 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/config.json ...
I0830 20:26:17.903719 241645 start.go:128] duration metric: createHost completed in 22.89035769s
I0830 20:26:17.903746 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:17.905713 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.906013 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:17.906039 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:17.906169 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:17.906363 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:17.906528 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:17.906650 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:17.906822 241645 main.go:141] libmachine: Using SSH client type: native
I0830 20:26:17.907256 241645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.39.87 22 <nil> <nil>}
I0830 20:26:17.907270 241645 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0830 20:26:18.035722 241645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1693427178.008129069
I0830 20:26:18.035749 241645 fix.go:206] guest clock: 1693427178.008129069
I0830 20:26:18.035757 241645 fix.go:219] Guest: 2023-08-30 20:26:18.008129069 +0000 UTC Remote: 2023-08-30 20:26:17.903735593 +0000 UTC m=+99.699112165 (delta=104.393476ms)
I0830 20:26:18.035771 241645 fix.go:190] guest clock delta is within tolerance: 104.393476ms
I0830 20:26:18.035776 241645 start.go:83] releasing machines lock for "multinode-944570-m02", held for 23.02250006s
I0830 20:26:18.035794 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:18.036095 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
I0830 20:26:18.038762 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:18.039123 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:18.039159 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:18.041547 241645 out.go:177] * Found network options:
I0830 20:26:18.043026 241645 out.go:177] - NO_PROXY=192.168.39.254
W0830 20:26:18.044413 241645 proxy.go:119] fail to check proxy env: Error ip not in block
I0830 20:26:18.044459 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:18.045019 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:18.045186 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .DriverName
I0830 20:26:18.045276 241645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0830 20:26:18.045314 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
W0830 20:26:18.045393 241645 proxy.go:119] fail to check proxy env: Error ip not in block
I0830 20:26:18.045464 241645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0830 20:26:18.045479 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHHostname
I0830 20:26:18.048117 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:18.048173 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:18.048497 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:18.048518 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:18.048543 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:18.048557 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:18.048717 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:18.048852 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHPort
I0830 20:26:18.048923 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:18.049033 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHKeyPath
I0830 20:26:18.049133 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:18.049195 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetSSHUsername
I0830 20:26:18.049297 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
I0830 20:26:18.049441 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570-m02/id_rsa Username:docker}
I0830 20:26:18.167831 241645 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0830 20:26:18.168666 241645 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0830 20:26:18.168704 241645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0830 20:26:18.168758 241645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0830 20:26:18.182323 241645 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0830 20:26:18.182557 241645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0830 20:26:18.182578 241645 start.go:466] detecting cgroup driver to use...
I0830 20:26:18.182699 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:26:18.198089 241645 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0830 20:26:18.198494 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0830 20:26:18.207229 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0830 20:26:18.216468 241645 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0830 20:26:18.216541 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0830 20:26:18.225253 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:26:18.233992 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0830 20:26:18.242668 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0830 20:26:18.251139 241645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0830 20:26:18.260192 241645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0830 20:26:18.268775 241645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0830 20:26:18.276482 241645 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0830 20:26:18.276560 241645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0830 20:26:18.284162 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:26:18.382328 241645 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0830 20:26:18.398015 241645 start.go:466] detecting cgroup driver to use...
I0830 20:26:18.398117 241645 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0830 20:26:18.411951 241645 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0830 20:26:18.411969 241645 command_runner.go:130] > [Unit]
I0830 20:26:18.411975 241645 command_runner.go:130] > Description=Docker Application Container Engine
I0830 20:26:18.411981 241645 command_runner.go:130] > Documentation=https://docs.docker.com
I0830 20:26:18.411986 241645 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0830 20:26:18.411991 241645 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0830 20:26:18.411996 241645 command_runner.go:130] > StartLimitBurst=3
I0830 20:26:18.412000 241645 command_runner.go:130] > StartLimitIntervalSec=60
I0830 20:26:18.412005 241645 command_runner.go:130] > [Service]
I0830 20:26:18.412008 241645 command_runner.go:130] > Type=notify
I0830 20:26:18.412013 241645 command_runner.go:130] > Restart=on-failure
I0830 20:26:18.412025 241645 command_runner.go:130] > Environment=NO_PROXY=192.168.39.254
I0830 20:26:18.412035 241645 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0830 20:26:18.412049 241645 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0830 20:26:18.412064 241645 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0830 20:26:18.412075 241645 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0830 20:26:18.412083 241645 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0830 20:26:18.412090 241645 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0830 20:26:18.412098 241645 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0830 20:26:18.412107 241645 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0830 20:26:18.412117 241645 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0830 20:26:18.412121 241645 command_runner.go:130] > ExecStart=
I0830 20:26:18.412135 241645 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0830 20:26:18.412145 241645 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0830 20:26:18.412156 241645 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0830 20:26:18.412172 241645 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0830 20:26:18.412182 241645 command_runner.go:130] > LimitNOFILE=infinity
I0830 20:26:18.412188 241645 command_runner.go:130] > LimitNPROC=infinity
I0830 20:26:18.412200 241645 command_runner.go:130] > LimitCORE=infinity
I0830 20:26:18.412210 241645 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0830 20:26:18.412221 241645 command_runner.go:130] > # Only systemd 226 and above support this version.
I0830 20:26:18.412227 241645 command_runner.go:130] > TasksMax=infinity
I0830 20:26:18.412232 241645 command_runner.go:130] > TimeoutStartSec=0
I0830 20:26:18.412241 241645 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0830 20:26:18.412245 241645 command_runner.go:130] > Delegate=yes
I0830 20:26:18.412253 241645 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0830 20:26:18.412261 241645 command_runner.go:130] > KillMode=process
I0830 20:26:18.412267 241645 command_runner.go:130] > [Install]
I0830 20:26:18.412271 241645 command_runner.go:130] > WantedBy=multi-user.target
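The empty `ExecStart=` line in the unit dump above is the standard systemd idiom for replacing, rather than appending to, the start command: for non-oneshot services, systemd refuses to start a unit that accumulates more than one `ExecStart=` value, so a drop-in must first clear the list with a bare `ExecStart=` before redefining it. A minimal sketch of checking that a drop-in follows this pattern (the file content and temp path here are illustrative, not minikube's actual files):

```shell
#!/bin/sh
# Sketch: verify a systemd drop-in clears ExecStart before redefining it.
# Written to a temp file for demonstration; not minikube's real layout.
set -eu

dropin="$(mktemp)"
cat > "$dropin" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# The first ExecStart line must be the empty "reset" line; otherwise a
# non-oneshot unit would have two ExecStart= values and fail to start.
first=$(grep '^ExecStart=' "$dropin" | head -n1)
[ "$first" = "ExecStart=" ] && echo "drop-in resets ExecStart correctly"
rm -f "$dropin"
```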
I0830 20:26:18.412327 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:26:18.424774 241645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0830 20:26:18.444173 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0830 20:26:18.457853 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:26:18.469785 241645 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0830 20:26:18.503424 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0830 20:26:18.516289 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0830 20:26:18.534273 241645 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
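The `%!s(MISSING)` in the logged command is a formatting artifact of the logger; the command actually run writes `/etc/crictl.yaml` so that crictl talks to the cri-dockerd socket, as the `tee` echo confirms. A hedged sketch of the same write, targeting a temporary directory instead of `/etc`:

```shell
#!/bin/sh
# Sketch: write a crictl.yaml pointing at cri-dockerd, as the log does,
# but into a temp dir rather than /etc (all paths here are illustrative).
set -eu

etc="$(mktemp -d)"
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' \
  | tee "$etc/crictl.yaml"

# crictl reads this file to decide which CRI endpoint to query.
grep -q 'unix:///var/run/cri-dockerd.sock' "$etc/crictl.yaml" \
  && echo "crictl config written"
rm -rf "$etc"
```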
I0830 20:26:18.534348 241645 ssh_runner.go:195] Run: which cri-dockerd
I0830 20:26:18.537674 241645 command_runner.go:130] > /usr/bin/cri-dockerd
I0830 20:26:18.537957 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0830 20:26:18.547927 241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0830 20:26:18.563554 241645 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0830 20:26:18.668812 241645 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0830 20:26:18.771145 241645 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0830 20:26:18.771175 241645 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0830 20:26:18.786831 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:26:18.896129 241645 ssh_runner.go:195] Run: sudo systemctl restart docker
I0830 20:26:20.250484 241645 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.354319055s)
I0830 20:26:20.250549 241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:26:20.354304 241645 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0830 20:26:20.458215 241645 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0830 20:26:20.558505 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:26:20.656554 241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0830 20:26:20.670952 241645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0830 20:26:20.775121 241645 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0830 20:26:20.851467 241645 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0830 20:26:20.851558 241645 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0830 20:26:20.856769 241645 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0830 20:26:20.856792 241645 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0830 20:26:20.856798 241645 command_runner.go:130] > Device: 16h/22d Inode: 947 Links: 1
I0830 20:26:20.856805 241645 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0830 20:26:20.856815 241645 command_runner.go:130] > Access: 2023-08-30 20:26:20.762615498 +0000
I0830 20:26:20.856820 241645 command_runner.go:130] > Modify: 2023-08-30 20:26:20.762615498 +0000
I0830 20:26:20.856824 241645 command_runner.go:130] > Change: 2023-08-30 20:26:20.765619781 +0000
I0830 20:26:20.856828 241645 command_runner.go:130] > Birth: -
I0830 20:26:20.856846 241645 start.go:534] Will wait 60s for crictl version
I0830 20:26:20.856897 241645 ssh_runner.go:195] Run: which crictl
I0830 20:26:20.861731 241645 command_runner.go:130] > /usr/bin/crictl
I0830 20:26:20.862160 241645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0830 20:26:20.898717 241645 command_runner.go:130] > Version: 0.1.0
I0830 20:26:20.898750 241645 command_runner.go:130] > RuntimeName: docker
I0830 20:26:20.898758 241645 command_runner.go:130] > RuntimeVersion: 24.0.5
I0830 20:26:20.898767 241645 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0830 20:26:20.898792 241645 start.go:550] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.5
RuntimeApiVersion: v1alpha2
I0830 20:26:20.898853 241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0830 20:26:20.926751 241645 command_runner.go:130] > 24.0.5
I0830 20:26:20.927803 241645 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0830 20:26:20.953389 241645 command_runner.go:130] > 24.0.5
I0830 20:26:20.957055 241645 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
I0830 20:26:20.958479 241645 out.go:177] - env NO_PROXY=192.168.39.254
I0830 20:26:20.959809 241645 main.go:141] libmachine: (multinode-944570-m02) Calling .GetIP
I0830 20:26:20.962454 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:20.962850 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:a1:9d", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:26:09 +0000 UTC Type:0 Mac:52:54:00:c1:a1:9d Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:multinode-944570-m02 Clientid:01:52:54:00:c1:a1:9d}
I0830 20:26:20.962894 241645 main.go:141] libmachine: (multinode-944570-m02) DBG | domain multinode-944570-m02 has defined IP address 192.168.39.87 and MAC address 52:54:00:c1:a1:9d in network mk-multinode-944570
I0830 20:26:20.963067 241645 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0830 20:26:20.966820 241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
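The command above is minikube's idiom for idempotently pinning a hostname in `/etc/hosts`: drop any line already ending in the name, append the fresh mapping, and copy the rebuilt file back in one step. A sketch of the same pattern applied to a scratch file rather than the real `/etc/hosts` (addresses here are illustrative):

```shell
#!/bin/bash
# Sketch of the "{ grep -v ...; echo ...; } > tmp; cp tmp hosts" pattern
# from the log, run against a scratch hosts file instead of /etc/hosts.
set -eu

hosts="$(mktemp)"
printf '127.0.0.1 localhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"

tmp="$(mktemp)"
# Remove any stale mapping for the name, then append the current one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.39.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"

# Exactly one mapping survives, and it is the new address.
grep 'host.minikube.internal' "$hosts"
rm -f "$hosts" "$tmp"
```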
I0830 20:26:20.978285 241645 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570 for IP: 192.168.39.87
I0830 20:26:20.978318 241645 certs.go:190] acquiring lock for shared ca certs: {Name:mk1ac5fe312bfdaa0e7afaffac50c875afeaeaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0830 20:26:20.978453 241645 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key
I0830 20:26:20.978494 241645 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key
I0830 20:26:20.978507 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0830 20:26:20.978528 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0830 20:26:20.978544 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0830 20:26:20.978558 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0830 20:26:20.978625 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem (1338 bytes)
W0830 20:26:20.978663 241645 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347_empty.pem, impossibly tiny 0 bytes
I0830 20:26:20.978679 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca-key.pem (1679 bytes)
I0830 20:26:20.978716 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/ca.pem (1082 bytes)
I0830 20:26:20.978746 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/cert.pem (1123 bytes)
I0830 20:26:20.978779 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/home/jenkins/minikube-integration/17145-222139/.minikube/certs/key.pem (1675 bytes)
I0830 20:26:20.978830 241645 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem (1708 bytes)
I0830 20:26:20.978866 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0830 20:26:20.978885 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem -> /usr/share/ca-certificates/229347.pem
I0830 20:26:20.978901 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem -> /usr/share/ca-certificates/2293472.pem
I0830 20:26:20.979383 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0830 20:26:21.000348 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0830 20:26:21.020959 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0830 20:26:21.041729 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0830 20:26:21.063373 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0830 20:26:21.085100 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/certs/229347.pem --> /usr/share/ca-certificates/229347.pem (1338 bytes)
I0830 20:26:21.106314 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/files/etc/ssl/certs/2293472.pem --> /usr/share/ca-certificates/2293472.pem (1708 bytes)
I0830 20:26:21.126996 241645 ssh_runner.go:195] Run: openssl version
I0830 20:26:21.131711 241645 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0830 20:26:21.132008 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2293472.pem && ln -fs /usr/share/ca-certificates/2293472.pem /etc/ssl/certs/2293472.pem"
I0830 20:26:21.140915 241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2293472.pem
I0830 20:26:21.144851 241645 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
I0830 20:26:21.145021 241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 20:12 /usr/share/ca-certificates/2293472.pem
I0830 20:26:21.145070 241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2293472.pem
I0830 20:26:21.149881 241645 command_runner.go:130] > 3ec20f2e
I0830 20:26:21.149986 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2293472.pem /etc/ssl/certs/3ec20f2e.0"
I0830 20:26:21.158319 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0830 20:26:21.166602 241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0830 20:26:21.170509 241645 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
I0830 20:26:21.170535 241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 20:06 /usr/share/ca-certificates/minikubeCA.pem
I0830 20:26:21.170571 241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0830 20:26:21.175250 241645 command_runner.go:130] > b5213941
I0830 20:26:21.175466 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0830 20:26:21.184136 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/229347.pem && ln -fs /usr/share/ca-certificates/229347.pem /etc/ssl/certs/229347.pem"
I0830 20:26:21.192400 241645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/229347.pem
I0830 20:26:21.196494 241645 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
I0830 20:26:21.196518 241645 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 20:12 /usr/share/ca-certificates/229347.pem
I0830 20:26:21.196567 241645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/229347.pem
I0830 20:26:21.201292 241645 command_runner.go:130] > 51391683
I0830 20:26:21.201569 241645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/229347.pem /etc/ssl/certs/51391683.0"
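The `openssl x509 -hash` plus symlink steps above implement OpenSSL's CA lookup convention: tools search `/etc/ssl/certs` for a file named `<subject-hash>.0`, so each installed CA cert gets a hash-named symlink. A sketch of the same dance in a temp directory with a throwaway self-signed cert (the subject name and paths are illustrative, and this assumes the `openssl` CLI is available):

```shell
#!/bin/bash
# Sketch: create a cert, compute its subject hash, and install the
# hash-named symlink OpenSSL uses for CA lookups. Temp dir stands in
# for /etc/ssl/certs; the cert itself is a throwaway.
set -eu

dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null

# Same hash the log computes (e.g. b5213941 for minikubeCA.pem).
hash=$(openssl x509 -hash -noout -in "$dir/cert.pem")
ln -fs "$dir/cert.pem" "$dir/$hash.0"

ls -l "$dir/$hash.0"
rm -rf "$dir"
```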
I0830 20:26:21.209807 241645 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0830 20:26:21.213600 241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0830 20:26:21.213633 241645 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0830 20:26:21.213698 241645 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0830 20:26:21.236481 241645 command_runner.go:130] > cgroupfs
I0830 20:26:21.237455 241645 cni.go:84] Creating CNI manager for ""
I0830 20:26:21.237472 241645 cni.go:136] 2 nodes found, recommending kindnet
I0830 20:26:21.237485 241645 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0830 20:26:21.237507 241645 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-944570 NodeName:multinode-944570-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

I0830 20:26:21.237680 241645 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.87
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-944570-m02"
kubeletExtraArgs:
node-ip: 192.168.39.87
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.254"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0830 20:26:21.237759 241645 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-944570-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
[Install]
config:
{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0830 20:26:21.237819 241645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
I0830 20:26:21.245688 241645 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
I0830 20:26:21.245725 241645 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.1: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
Initiating transfer...
I0830 20:26:21.246145 241645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.1
I0830 20:26:21.255257 241645 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256
I0830 20:26:21.255283 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubectl -> /var/lib/minikube/binaries/v1.28.1/kubectl
I0830 20:26:21.255368 241645 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.1/kubectl
I0830 20:26:21.255385 241645 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubelet
I0830 20:26:21.255411 241645 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubeadm
I0830 20:26:21.260026 241645 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
I0830 20:26:21.260237 241645 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.1/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
I0830 20:26:21.260263 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubectl --> /var/lib/minikube/binaries/v1.28.1/kubectl (49864704 bytes)
I0830 20:26:23.412564 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0830 20:26:23.425559 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubelet -> /var/lib/minikube/binaries/v1.28.1/kubelet
I0830 20:26:23.425645 241645 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.1/kubelet
I0830 20:26:23.429518 241645 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
I0830 20:26:23.429589 241645 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.1/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
I0830 20:26:23.429620 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubelet --> /var/lib/minikube/binaries/v1.28.1/kubelet (110764032 bytes)
I0830 20:26:46.030696 241645 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubeadm -> /var/lib/minikube/binaries/v1.28.1/kubeadm
I0830 20:26:46.030775 241645 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.1/kubeadm
I0830 20:26:46.035417 241645 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
I0830 20:26:46.035663 241645 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.1/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
I0830 20:26:46.035707 241645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-222139/.minikube/cache/linux/amd64/v1.28.1/kubeadm --> /var/lib/minikube/binaries/v1.28.1/kubeadm (50749440 bytes)
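The `checksum=file:...sha256` suffix on the download URLs above means each binary (kubectl, kubelet, kubeadm) is verified against its published SHA-256 digest before being pushed to the node. The same verification can be sketched with `sha256sum` on a scratch file (the file content and paths here are stand-ins, not the real binaries):

```shell
#!/bin/bash
# Sketch: verify a downloaded binary against a recorded SHA-256 digest,
# as minikube's checksum=file:... download option does. Demonstrated on
# a fake file; in real use the digest comes from dl.k8s.io's .sha256.
set -eu

dir="$(mktemp -d)"
printf 'fake kubeadm binary\n' > "$dir/kubeadm"

# Record the expected digest (stand-in for the published kubeadm.sha256).
sha256sum "$dir/kubeadm" | awk '{print $1}' > "$dir/kubeadm.sha256"

# Verify: recompute and compare against the recorded digest.
actual=$(sha256sum "$dir/kubeadm" | awk '{print $1}')
expected=$(cat "$dir/kubeadm.sha256")
[ "$actual" = "$expected" ] && echo "checksum OK"
rm -rf "$dir"
```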
I0830 20:26:46.260730 241645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0830 20:26:46.268460 241645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
I0830 20:26:46.282326 241645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0830 20:26:46.296262 241645 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0830 20:26:46.299948 241645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0830 20:26:46.310073 241645 host.go:66] Checking if "multinode-944570" exists ...
I0830 20:26:46.310367 241645 config.go:182] Loaded profile config "multinode-944570": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 20:26:46.310509 241645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0830 20:26:46.310555 241645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0830 20:26:46.325256 241645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
I0830 20:26:46.325684 241645 main.go:141] libmachine: () Calling .GetVersion
I0830 20:26:46.326182 241645 main.go:141] libmachine: Using API Version 1
I0830 20:26:46.326204 241645 main.go:141] libmachine: () Calling .SetConfigRaw
I0830 20:26:46.326525 241645 main.go:141] libmachine: () Calling .GetMachineName
I0830 20:26:46.326698 241645 main.go:141] libmachine: (multinode-944570) Calling .DriverName
I0830 20:26:46.326854 241645 start.go:301] JoinCluster: &{Name:multinode-944570 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-944570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0830 20:26:46.326982 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
I0830 20:26:46.326998 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHHostname
I0830 20:26:46.329653 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:26:46.330088 241645 main.go:141] libmachine: (multinode-944570) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:42:84", ip: ""} in network mk-multinode-944570: {Iface:virbr1 ExpiryTime:2023-08-30 21:24:53 +0000 UTC Type:0 Mac:52:54:00:50:42:84 Iaid: IPaddr:192.168.39.254 Prefix:24 Hostname:multinode-944570 Clientid:01:52:54:00:50:42:84}
I0830 20:26:46.330111 241645 main.go:141] libmachine: (multinode-944570) DBG | domain multinode-944570 has defined IP address 192.168.39.254 and MAC address 52:54:00:50:42:84 in network mk-multinode-944570
I0830 20:26:46.330267 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHPort
I0830 20:26:46.330439 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHKeyPath
I0830 20:26:46.330612 241645 main.go:141] libmachine: (multinode-944570) Calling .GetSSHUsername
I0830 20:26:46.330753 241645 sshutil.go:53] new ssh client: &{IP:192.168.39.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17145-222139/.minikube/machines/multinode-944570/id_rsa Username:docker}
I0830 20:26:46.532641 241645 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e0xxtf.j38sb4ogstadzdh0 --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6
I0830 20:26:46.535817 241645 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0830 20:26:46.535930 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e0xxtf.j38sb4ogstadzdh0 --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-944570-m02"
I0830 20:26:46.581983 241645 command_runner.go:130] ! W0830 20:26:46.573788 1161 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0830 20:26:46.718592 241645 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0830 20:26:49.398181 241645 command_runner.go:130] > [preflight] Running pre-flight checks
I0830 20:26:49.398208 241645 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0830 20:26:49.398222 241645 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0830 20:26:49.398232 241645 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0830 20:26:49.398243 241645 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0830 20:26:49.398250 241645 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0830 20:26:49.398260 241645 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0830 20:26:49.398268 241645 command_runner.go:130] > This node has joined the cluster:
I0830 20:26:49.398279 241645 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0830 20:26:49.398298 241645 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0830 20:26:49.398310 241645 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0830 20:26:49.398336 241645 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e0xxtf.j38sb4ogstadzdh0 --discovery-token-ca-cert-hash sha256:c7d83bf61acd2074da49416c1394017fb833fac06001902ce7698890024b9ad6 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-944570-m02": (2.862384316s)
I0830 20:26:49.398362 241645 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0830 20:26:49.650865 241645 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
I0830 20:26:49.650918 241645 start.go:303] JoinCluster complete in 3.324064662s
I0830 20:26:49.650935 241645 cni.go:84] Creating CNI manager for ""
I0830 20:26:49.650942 241645 cni.go:136] 2 nodes found, recommending kindnet
I0830 20:26:49.651007 241645 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0830 20:26:49.655973 241645 command_runner.go:130] > File: /opt/cni/bin/portmap
I0830 20:26:49.655992 241645 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I0830 20:26:49.655999 241645 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I0830 20:26:49.656005 241645 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0830 20:26:49.656010 241645 command_runner.go:130] > Access: 2023-08-30 20:24:50.661585107 +0000
I0830 20:26:49.656015 241645 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
I0830 20:26:49.656019 241645 command_runner.go:130] > Change: 2023-08-30 20:24:48.918585107 +0000
I0830 20:26:49.656023 241645 command_runner.go:130] > Birth: -
I0830 20:26:49.656323 241645 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
I0830 20:26:49.656346 241645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0830 20:26:49.672810 241645 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0830 20:26:49.998917 241645 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0830 20:26:50.000674 241645 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0830 20:26:50.002927 241645 command_runner.go:130] > serviceaccount/kindnet unchanged
I0830 20:26:50.015310 241645 command_runner.go:130] > daemonset.apps/kindnet configured
I0830 20:26:50.021109 241645 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17145-222139/kubeconfig
I0830 20:26:50.021511 241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0830 20:26:50.022036 241645 round_trippers.go:463] GET https://192.168.39.254:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0830 20:26:50.022050 241645 round_trippers.go:469] Request Headers:
I0830 20:26:50.022061 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:50.022071 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:50.024458 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:50.024481 241645 round_trippers.go:577] Response Headers:
I0830 20:26:50.024490 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:50.024499 241645 round_trippers.go:580] Content-Length: 291
I0830 20:26:50.024515 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:50 GMT
I0830 20:26:50.024524 241645 round_trippers.go:580] Audit-Id: f753ef77-215a-4bc7-8333-2baa348d3313
I0830 20:26:50.024533 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:50.024543 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:50.024555 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:50.024583 241645 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"c7c6dc1f-7aa3-4c43-b4bf-a6ffaa653be3","resourceVersion":"457","creationTimestamp":"2023-08-30T20:25:25Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0830 20:26:50.024675 241645 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-944570" context rescaled to 1 replicas
I0830 20:26:50.024702 241645 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.87 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:false Worker:true}
I0830 20:26:50.027480 241645 out.go:177] * Verifying Kubernetes components...
I0830 20:26:50.029053 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0830 20:26:50.042408 241645 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17145-222139/kubeconfig
I0830 20:26:50.042646 241645 kapi.go:59] client config for multinode-944570: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/profiles/multinode-944570/client.key", CAFile:"/home/jenkins/minikube-integration/17145-222139/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d63c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0830 20:26:50.042896 241645 node_ready.go:35] waiting up to 6m0s for node "multinode-944570-m02" to be "Ready" ...
I0830 20:26:50.042975 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:50.042991 241645 round_trippers.go:469] Request Headers:
I0830 20:26:50.043002 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:50.043010 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:50.047566 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:26:50.047588 241645 round_trippers.go:577] Response Headers:
I0830 20:26:50.047598 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:50.047607 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:50.047615 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:50.047624 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:50.047637 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:50.047647 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:50 GMT
I0830 20:26:50.047658 241645 round_trippers.go:580] Audit-Id: f545e445-776e-4594-b0cb-1b5d714a44a6
I0830 20:26:50.047825 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:50.048142 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:50.048157 241645 round_trippers.go:469] Request Headers:
I0830 20:26:50.048166 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:50.048176 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:50.050951 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:50.050971 241645 round_trippers.go:577] Response Headers:
I0830 20:26:50.050980 241645 round_trippers.go:580] Audit-Id: ed642158-7514-4759-88be-d27c59dcf46d
I0830 20:26:50.050989 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:50.050997 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:50.051008 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:50.051023 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:50.051032 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:50.051044 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:50 GMT
I0830 20:26:50.051141 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:50.552170 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:50.552193 241645 round_trippers.go:469] Request Headers:
I0830 20:26:50.552202 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:50.552208 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:50.556706 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:26:50.556723 241645 round_trippers.go:577] Response Headers:
I0830 20:26:50.556730 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:50.556738 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:50.556747 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:50.556755 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:50.556765 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:50.556775 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:50 GMT
I0830 20:26:50.556781 241645 round_trippers.go:580] Audit-Id: 4b584d9b-b0d6-40a6-89ca-9e81ff84fb23
I0830 20:26:50.557226 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:51.051915 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:51.051946 241645 round_trippers.go:469] Request Headers:
I0830 20:26:51.051956 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:51.051964 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:51.054545 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:51.054565 241645 round_trippers.go:577] Response Headers:
I0830 20:26:51.054574 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:51.054583 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:51.054591 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:51.054596 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:51.054602 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:51 GMT
I0830 20:26:51.054607 241645 round_trippers.go:580] Audit-Id: 38946057-f7d9-410f-8656-feba1211a3f3
I0830 20:26:51.054616 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:51.054704 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:51.551802 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:51.551830 241645 round_trippers.go:469] Request Headers:
I0830 20:26:51.551838 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:51.551845 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:51.556087 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:26:51.556110 241645 round_trippers.go:577] Response Headers:
I0830 20:26:51.556118 241645 round_trippers.go:580] Audit-Id: d6ae2869-80ed-40a6-bcaa-d19b50a839f7
I0830 20:26:51.556123 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:51.556128 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:51.556133 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:51.556139 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:51.556144 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:51.556149 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:51 GMT
I0830 20:26:51.556322 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:52.052557 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:52.052581 241645 round_trippers.go:469] Request Headers:
I0830 20:26:52.052590 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:52.052599 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:52.056721 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:26:52.056743 241645 round_trippers.go:577] Response Headers:
I0830 20:26:52.056750 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:52.056756 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:52 GMT
I0830 20:26:52.056761 241645 round_trippers.go:580] Audit-Id: 4f3fb346-f49f-4998-8168-9486d0c18545
I0830 20:26:52.056766 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:52.056772 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:52.056778 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:52.056787 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:52.057058 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:52.057325 241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
I0830 20:26:52.552631 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:52.552665 241645 round_trippers.go:469] Request Headers:
I0830 20:26:52.552677 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:52.552687 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:52.555837 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:26:52.555873 241645 round_trippers.go:577] Response Headers:
I0830 20:26:52.555885 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:52.555893 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:52.555901 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:52.555910 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:52.555919 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:52 GMT
I0830 20:26:52.555932 241645 round_trippers.go:580] Audit-Id: 13c81e07-ab24-44d5-ab02-c1a7e5a08e0a
I0830 20:26:52.555941 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:52.556055 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:53.052568 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:53.052591 241645 round_trippers.go:469] Request Headers:
I0830 20:26:53.052599 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:53.052606 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:53.055367 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:53.055394 241645 round_trippers.go:577] Response Headers:
I0830 20:26:53.055406 241645 round_trippers.go:580] Audit-Id: 0a5e66d8-3ae4-4c87-b380-a7fac3c498f9
I0830 20:26:53.055415 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:53.055423 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:53.055433 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:53.055445 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:53.055454 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:53.055463 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:53 GMT
I0830 20:26:53.055540 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:53.551917 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:53.551947 241645 round_trippers.go:469] Request Headers:
I0830 20:26:53.551960 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:53.551970 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:53.554596 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:53.554614 241645 round_trippers.go:577] Response Headers:
I0830 20:26:53.554621 241645 round_trippers.go:580] Content-Length: 3484
I0830 20:26:53.554627 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:53 GMT
I0830 20:26:53.554632 241645 round_trippers.go:580] Audit-Id: dc9cb34b-197c-4d5f-8b08-ca7aa76218a4
I0830 20:26:53.554642 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:53.554651 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:53.554660 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:53.554670 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:53.554754 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"528","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2460 chars]
I0830 20:26:54.052437 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:54.052460 241645 round_trippers.go:469] Request Headers:
I0830 20:26:54.052475 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:54.052481 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:54.055696 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:26:54.055718 241645 round_trippers.go:577] Response Headers:
I0830 20:26:54.055726 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:54.055732 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:54.055738 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:54.055743 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:54.055749 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:54.055754 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:54 GMT
I0830 20:26:54.055761 241645 round_trippers.go:580] Audit-Id: 0bbdc749-b427-48d9-b91c-a2f80d9d6182
I0830 20:26:54.055855 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:54.551952 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:54.551977 241645 round_trippers.go:469] Request Headers:
I0830 20:26:54.551985 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:54.551991 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:54.555422 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:26:54.555440 241645 round_trippers.go:577] Response Headers:
I0830 20:26:54.555447 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:54 GMT
I0830 20:26:54.555454 241645 round_trippers.go:580] Audit-Id: 18e4ae8e-28d1-49d2-a122-d99ce6a50b26
I0830 20:26:54.555468 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:54.555479 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:54.555490 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:54.555502 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:54.555510 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:54.555601 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:54.555925 241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
I0830 20:26:55.051601 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:55.051626 241645 round_trippers.go:469] Request Headers:
I0830 20:26:55.051640 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:55.051650 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:55.054516 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:55.054537 241645 round_trippers.go:577] Response Headers:
I0830 20:26:55.054549 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:55.054556 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:55.054565 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:55.054575 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:55 GMT
I0830 20:26:55.054586 241645 round_trippers.go:580] Audit-Id: 974f116d-0216-4762-9183-1015e51e5e3d
I0830 20:26:55.054596 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:55.054606 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:55.054713 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:55.551935 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:55.551961 241645 round_trippers.go:469] Request Headers:
I0830 20:26:55.551974 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:55.551982 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:55.554727 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:55.554755 241645 round_trippers.go:577] Response Headers:
I0830 20:26:55.554766 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:55.554775 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:55.554783 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:55 GMT
I0830 20:26:55.554791 241645 round_trippers.go:580] Audit-Id: 5105a81f-7ebf-4f95-94e4-d96ad38601ce
I0830 20:26:55.554800 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:55.554810 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:55.554820 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:55.554913 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:56.052573 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:56.052604 241645 round_trippers.go:469] Request Headers:
I0830 20:26:56.052625 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:56.052634 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:56.055454 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:56.055487 241645 round_trippers.go:577] Response Headers:
I0830 20:26:56.055500 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:56.055509 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:56.055523 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:56.055532 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:56 GMT
I0830 20:26:56.055545 241645 round_trippers.go:580] Audit-Id: 2ebe5a02-ac79-4615-92d3-0611eda8efb2
I0830 20:26:56.055556 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:56.055567 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:56.055629 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:56.551867 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:56.551894 241645 round_trippers.go:469] Request Headers:
I0830 20:26:56.551905 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:56.551913 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:56.554437 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:56.554469 241645 round_trippers.go:577] Response Headers:
I0830 20:26:56.554482 241645 round_trippers.go:580] Audit-Id: 32c6b61d-1015-42d4-9f3c-250479cf0464
I0830 20:26:56.554491 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:56.554504 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:56.554514 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:56.554523 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:56.554533 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:56.554542 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:56 GMT
I0830 20:26:56.554659 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:57.052026 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:57.052052 241645 round_trippers.go:469] Request Headers:
I0830 20:26:57.052061 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:57.052067 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:57.055201 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:26:57.055229 241645 round_trippers.go:577] Response Headers:
I0830 20:26:57.055239 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:57.055248 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:57 GMT
I0830 20:26:57.055257 241645 round_trippers.go:580] Audit-Id: 8a7fd742-f25f-40ea-b9ac-46606095e66c
I0830 20:26:57.055266 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:57.055275 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:57.055286 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:57.055311 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:57.055408 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:57.055739 241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
I0830 20:26:57.552646 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:57.552676 241645 round_trippers.go:469] Request Headers:
I0830 20:26:57.552692 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:57.552698 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:57.556398 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:26:57.556469 241645 round_trippers.go:577] Response Headers:
I0830 20:26:57.556487 241645 round_trippers.go:580] Audit-Id: 7bee2d41-58d7-4568-9fca-95149c42f714
I0830 20:26:57.556495 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:57.556501 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:57.556507 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:57.556513 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:57.556528 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:57.556534 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:57 GMT
I0830 20:26:57.556634 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:58.052279 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:58.052302 241645 round_trippers.go:469] Request Headers:
I0830 20:26:58.052314 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:58.052324 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:58.055751 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:26:58.055776 241645 round_trippers.go:577] Response Headers:
I0830 20:26:58.055787 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:58 GMT
I0830 20:26:58.055795 241645 round_trippers.go:580] Audit-Id: 444a9a46-a9ac-4d72-bae2-63959a4414b3
I0830 20:26:58.055806 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:58.055815 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:58.055824 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:58.055833 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:58.055845 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:58.055931 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:58.552342 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:58.552412 241645 round_trippers.go:469] Request Headers:
I0830 20:26:58.552425 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:58.552432 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:58.555516 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:26:58.555537 241645 round_trippers.go:577] Response Headers:
I0830 20:26:58.555547 241645 round_trippers.go:580] Audit-Id: a39077d2-2a79-48f2-a333-2af8a33e50fd
I0830 20:26:58.555555 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:58.555566 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:58.555575 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:58.555585 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:58.555595 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:58.555612 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:58 GMT
I0830 20:26:58.555759 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:59.052129 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:59.052152 241645 round_trippers.go:469] Request Headers:
I0830 20:26:59.052161 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:59.052166 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:59.054698 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:59.054721 241645 round_trippers.go:577] Response Headers:
I0830 20:26:59.054729 241645 round_trippers.go:580] Audit-Id: 272dca67-50b6-458e-b85c-d6ac249276b6
I0830 20:26:59.054735 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:59.054741 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:59.054746 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:59.054752 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:59.054757 241645 round_trippers.go:580] Content-Length: 3593
I0830 20:26:59.054763 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:59 GMT
I0830 20:26:59.054813 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"536","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2569 chars]
I0830 20:26:59.552600 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:26:59.552636 241645 round_trippers.go:469] Request Headers:
I0830 20:26:59.552650 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:26:59.552662 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:26:59.555615 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:26:59.555647 241645 round_trippers.go:577] Response Headers:
I0830 20:26:59.555661 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:26:59.555671 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:26:59.555680 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:26:59.555689 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:26:59.555702 241645 round_trippers.go:580] Content-Length: 3862
I0830 20:26:59.555708 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:26:59 GMT
I0830 20:26:59.555717 241645 round_trippers.go:580] Audit-Id: a29ce3e5-7430-4301-97b1-d11619163f9d
I0830 20:26:59.555821 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"553","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2838 chars]
I0830 20:26:59.556146 241645 node_ready.go:58] node "multinode-944570-m02" has status "Ready":"False"
I0830 20:27:00.051872 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:27:00.051892 241645 round_trippers.go:469] Request Headers:
I0830 20:27:00.051901 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:00.051908 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:00.054896 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:00.054915 241645 round_trippers.go:577] Response Headers:
I0830 20:27:00.054922 241645 round_trippers.go:580] Content-Length: 3862
I0830 20:27:00.054928 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:00 GMT
I0830 20:27:00.054934 241645 round_trippers.go:580] Audit-Id: ebf3b28e-9db0-432a-a4c0-e9dc1c97c040
I0830 20:27:00.054943 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:00.054957 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:00.054965 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:00.054977 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:00.055103 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"553","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2838 chars]
I0830 20:27:00.551695 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:27:00.551718 241645 round_trippers.go:469] Request Headers:
I0830 20:27:00.551727 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:00.551733 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:00.555088 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:27:00.555106 241645 round_trippers.go:577] Response Headers:
I0830 20:27:00.555113 241645 round_trippers.go:580] Audit-Id: 9b35cbd3-7b6d-4037-847d-0d49cb9ab635
I0830 20:27:00.555118 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:00.555130 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:00.555148 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:00.555158 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:00.555167 241645 round_trippers.go:580] Content-Length: 3862
I0830 20:27:00.555173 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:00 GMT
I0830 20:27:00.555257 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"553","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2838 chars]
I0830 20:27:01.051835 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:27:01.051881 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.051895 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.051906 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.054939 241645 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0830 20:27:01.054959 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.054967 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.054973 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.054978 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.054984 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.054989 241645 round_trippers.go:580] Content-Length: 3728
I0830 20:27:01.055001 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.055008 241645 round_trippers.go:580] Audit-Id: d5959320-74aa-479c-a0bf-f110e46db008
I0830 20:27:01.055095 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"559","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2704 chars]
I0830 20:27:01.055359 241645 node_ready.go:49] node "multinode-944570-m02" has status "Ready":"True"
I0830 20:27:01.055379 241645 node_ready.go:38] duration metric: took 11.012463872s waiting for node "multinode-944570-m02" to be "Ready" ...
I0830 20:27:01.055389 241645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0830 20:27:01.055454 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods
I0830 20:27:01.055462 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.055469 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.055476 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.065361 241645 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0830 20:27:01.065390 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.065401 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.065409 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.065415 241645 round_trippers.go:580] Audit-Id: 7c208406-7adc-4ffe-bf53-b35a15638314
I0830 20:27:01.065420 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.065425 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.065430 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.067625 241645 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67516 chars]
I0830 20:27:01.069692 241645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.069774 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lzj6n
I0830 20:27:01.069786 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.069798 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.069805 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.072491 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:01.072515 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.072525 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.072534 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.072542 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.072550 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.072562 241645 round_trippers.go:580] Audit-Id: 96928a7b-c510-49da-9fb6-48c5efc4a787
I0830 20:27:01.072571 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.072680 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lzj6n","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45","resourceVersion":"453","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"45f04028-4f16-400d-9690-5524a005f3c9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"45f04028-4f16-400d-9690-5524a005f3c9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
I0830 20:27:01.073238 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:01.073256 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.073266 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.073280 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.075216 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:27:01.075229 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.075236 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.075242 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.075251 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.075259 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.075271 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.075279 241645 round_trippers.go:580] Audit-Id: b9fb716b-863d-4d9c-8209-eec48de62088
I0830 20:27:01.075628 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0830 20:27:01.075991 241645 pod_ready.go:92] pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace has status "Ready":"True"
I0830 20:27:01.076008 241645 pod_ready.go:81] duration metric: took 6.295046ms waiting for pod "coredns-5dd5756b68-lzj6n" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.076016 241645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.076065 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-944570
I0830 20:27:01.076072 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.076079 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.076086 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.077742 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:27:01.077760 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.077777 241645 round_trippers.go:580] Audit-Id: a23ad99d-a893-43d3-a065-54aaa94e08bb
I0830 20:27:01.077787 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.077800 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.077808 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.077821 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.077833 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.077932 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-944570","namespace":"kube-system","uid":"8a7e3daf-bab9-401d-9448-0dd7a1710cc9","resourceVersion":"424","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.254:2379","kubernetes.io/config.hash":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.mirror":"fb846e75466869998dbb9a265eafadb1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839839858Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
I0830 20:27:01.078378 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:01.078395 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.078404 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.078417 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.079894 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:27:01.079907 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.079913 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.079918 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.079924 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.079932 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.079944 241645 round_trippers.go:580] Audit-Id: 01a147fc-973a-4f1c-b4e8-886ef6e6d0e5
I0830 20:27:01.079955 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.080099 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0830 20:27:01.080333 241645 pod_ready.go:92] pod "etcd-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:27:01.080344 241645 pod_ready.go:81] duration metric: took 4.323512ms waiting for pod "etcd-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.080357 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.080398 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-944570
I0830 20:27:01.080405 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.080412 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.080417 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.082010 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:27:01.082027 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.082033 241645 round_trippers.go:580] Audit-Id: 1c7dcfc6-70d9-4ffa-82f1-e4b4190b5ff7
I0830 20:27:01.082038 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.082043 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.082050 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.082056 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.082062 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.082219 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-944570","namespace":"kube-system","uid":"396cdb5a-0161-4c66-8588-6c1c62cae7be","resourceVersion":"425","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.254:8443","kubernetes.io/config.hash":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.mirror":"5c113dc76381297356051f3bc6bc6fd1","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
I0830 20:27:01.082529 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:01.082540 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.082547 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.082552 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.084675 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:01.084698 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.084704 241645 round_trippers.go:580] Audit-Id: ab0a710d-cae2-488e-9653-c41dc1031fa0
I0830 20:27:01.084709 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.084715 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.084720 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.084725 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.084730 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.084823 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0830 20:27:01.085053 241645 pod_ready.go:92] pod "kube-apiserver-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:27:01.085063 241645 pod_ready.go:81] duration metric: took 4.701298ms waiting for pod "kube-apiserver-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.085071 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.085110 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-944570
I0830 20:27:01.085118 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.085124 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.085131 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.086886 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:27:01.086901 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.086906 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.086912 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.086917 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.086926 241645 round_trippers.go:580] Audit-Id: 899dd9a5-0b3e-4c18-8465-ee73198a8bdc
I0830 20:27:01.086933 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.086946 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.087093 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-944570","namespace":"kube-system","uid":"6666fc21-62a9-4141-bb88-71bd4fe72b40","resourceVersion":"421","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.mirror":"ed3bbefd4c2f35595e2c0897a29a0a1c","kubernetes.io/config.seen":"2023-08-30T20:25:25.839841993Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
I0830 20:27:01.087425 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:01.087437 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.087444 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.087450 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.089061 241645 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0830 20:27:01.089078 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.089085 241645 round_trippers.go:580] Audit-Id: 73586935-479b-4fa5-a5e9-fc87c7811a4d
I0830 20:27:01.089090 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.089096 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.089104 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.089109 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.089118 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.089201 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0830 20:27:01.089419 241645 pod_ready.go:92] pod "kube-controller-manager-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:27:01.089431 241645 pod_ready.go:81] duration metric: took 4.354056ms waiting for pod "kube-controller-manager-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.089439 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hrz7d" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.252930 241645 request.go:629] Waited for 163.400815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrz7d
I0830 20:27:01.253002 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hrz7d
I0830 20:27:01.253007 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.253016 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.253023 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.255571 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:01.255599 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.255614 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.255628 241645 round_trippers.go:580] Audit-Id: d480c133-5b3d-48f3-ab73-b24b84173c92
I0830 20:27:01.255641 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.255651 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.255658 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.255681 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.255819 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hrz7d","generateName":"kube-proxy-","namespace":"kube-system","uid":"eb29e83b-aacd-4b74-b7f5-7f96252efba6","resourceVersion":"544","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"77539e61-eb1a-4d08-91c1-22ad50311843","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77539e61-eb1a-4d08-91c1-22ad50311843\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
I0830 20:27:01.452643 241645 request.go:629] Waited for 196.355886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:27:01.452732 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570-m02
I0830 20:27:01.452739 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.452746 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.452755 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.455646 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:01.455672 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.455679 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.455684 241645 round_trippers.go:580] Audit-Id: 64483f25-d1e9-4144-8f5f-8cecaa9f922b
I0830 20:27:01.455690 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.455700 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.455705 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.455711 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.455720 241645 round_trippers.go:580] Content-Length: 3728
I0830 20:27:01.455808 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570-m02","uid":"877d6c33-8fe2-4387-b833-27d6785aebbb","resourceVersion":"559","creationTimestamp":"2023-08-30T20:26:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:26:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2704 chars]
I0830 20:27:01.456050 241645 pod_ready.go:92] pod "kube-proxy-hrz7d" in "kube-system" namespace has status "Ready":"True"
I0830 20:27:01.456062 241645 pod_ready.go:81] duration metric: took 366.618979ms waiting for pod "kube-proxy-hrz7d" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.456071 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.652572 241645 request.go:629] Waited for 196.401598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
I0830 20:27:01.652635 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nqnp2
I0830 20:27:01.652640 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.652647 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.652657 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.656709 241645 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0830 20:27:01.656730 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.656737 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.656743 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.656752 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.656766 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.656774 241645 round_trippers.go:580] Audit-Id: 93f5a92c-5ec8-4de0-91cb-305e7cc512ca
I0830 20:27:01.656781 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.656913 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nqnp2","generateName":"kube-proxy-","namespace":"kube-system","uid":"fc7f17e0-b6ac-48c3-b449-e4eb3325505c","resourceVersion":"408","creationTimestamp":"2023-08-30T20:25:38Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"77539e61-eb1a-4d08-91c1-22ad50311843","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77539e61-eb1a-4d08-91c1-22ad50311843\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
I0830 20:27:01.852744 241645 request.go:629] Waited for 195.40388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:01.852821 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:01.852826 241645 round_trippers.go:469] Request Headers:
I0830 20:27:01.852834 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:01.852843 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:01.855391 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:01.855413 241645 round_trippers.go:577] Response Headers:
I0830 20:27:01.855420 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:01 GMT
I0830 20:27:01.855427 241645 round_trippers.go:580] Audit-Id: 190e8688-0718-4b50-8e07-8cab55d8cfa2
I0830 20:27:01.855432 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:01.855437 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:01.855444 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:01.855450 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:01.855818 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0830 20:27:01.856146 241645 pod_ready.go:92] pod "kube-proxy-nqnp2" in "kube-system" namespace has status "Ready":"True"
I0830 20:27:01.856161 241645 pod_ready.go:81] duration metric: took 400.084355ms waiting for pod "kube-proxy-nqnp2" in "kube-system" namespace to be "Ready" ...
I0830 20:27:01.856178 241645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:02.052484 241645 request.go:629] Waited for 196.215764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
I0830 20:27:02.052569 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-944570
I0830 20:27:02.052576 241645 round_trippers.go:469] Request Headers:
I0830 20:27:02.052584 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:02.052593 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:02.055473 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:02.055495 241645 round_trippers.go:577] Response Headers:
I0830 20:27:02.055505 241645 round_trippers.go:580] Audit-Id: 87f8eb24-a1b1-4989-8317-e50415cc134a
I0830 20:27:02.055524 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:02.055533 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:02.055541 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:02.055552 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:02.055557 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:02 GMT
I0830 20:27:02.055781 241645 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-944570","namespace":"kube-system","uid":"c2c628f7-bc4f-4f01-b67d-e105c72b8275","resourceVersion":"422","creationTimestamp":"2023-08-30T20:25:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.mirror":"21d92ce9120286f1f3c68c1f19570340","kubernetes.io/config.seen":"2023-08-30T20:25:25.839835923Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T20:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
I0830 20:27:02.252545 241645 request.go:629] Waited for 196.379889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:02.252626 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes/multinode-944570
I0830 20:27:02.252631 241645 round_trippers.go:469] Request Headers:
I0830 20:27:02.252639 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:02.252645 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:02.255037 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:02.255058 241645 round_trippers.go:577] Response Headers:
I0830 20:27:02.255065 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:02.255071 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:02.255076 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:02.255081 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:02 GMT
I0830 20:27:02.255087 241645 round_trippers.go:580] Audit-Id: 64ab7396-2c86-4e21-9b70-9dac81c2d387
I0830 20:27:02.255092 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:02.255325 241645 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-08-30T20:25:22Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0830 20:27:02.255637 241645 pod_ready.go:92] pod "kube-scheduler-multinode-944570" in "kube-system" namespace has status "Ready":"True"
I0830 20:27:02.255652 241645 pod_ready.go:81] duration metric: took 399.46153ms waiting for pod "kube-scheduler-multinode-944570" in "kube-system" namespace to be "Ready" ...
I0830 20:27:02.255661 241645 pod_ready.go:38] duration metric: took 1.200257143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0830 20:27:02.255683 241645 system_svc.go:44] waiting for kubelet service to be running ....
I0830 20:27:02.255732 241645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0830 20:27:02.268139 241645 system_svc.go:56] duration metric: took 12.447616ms WaitForService to wait for kubelet.
I0830 20:27:02.268168 241645 kubeadm.go:581] duration metric: took 12.243439268s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0830 20:27:02.268193 241645 node_conditions.go:102] verifying NodePressure condition ...
I0830 20:27:02.452461 241645 request.go:629] Waited for 184.179451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.254:8443/api/v1/nodes
I0830 20:27:02.452529 241645 round_trippers.go:463] GET https://192.168.39.254:8443/api/v1/nodes
I0830 20:27:02.452533 241645 round_trippers.go:469] Request Headers:
I0830 20:27:02.452541 241645 round_trippers.go:473] Accept: application/json, */*
I0830 20:27:02.452548 241645 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0830 20:27:02.455243 241645 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0830 20:27:02.455261 241645 round_trippers.go:577] Response Headers:
I0830 20:27:02.455269 241645 round_trippers.go:580] Audit-Id: f38bc00a-92e4-47fa-bad4-b656b9097cf9
I0830 20:27:02.455275 241645 round_trippers.go:580] Cache-Control: no-cache, private
I0830 20:27:02.455281 241645 round_trippers.go:580] Content-Type: application/json
I0830 20:27:02.455286 241645 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 91b2a2ff-36c8-430a-b11a-5c70f0be3287
I0830 20:27:02.455306 241645 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 2765fca9-c5e0-4404-bc0a-ed64570af2e9
I0830 20:27:02.455317 241645 round_trippers.go:580] Date: Wed, 30 Aug 2023 20:27:02 GMT
I0830 20:27:02.455515 241645 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"multinode-944570","uid":"b7dc0c37-78e1-4c13-aa53-c07250416b63","resourceVersion":"462","creationTimestamp":"2023-08-30T20:25:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-944570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-944570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T20_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 8708 chars]
I0830 20:27:02.456094 241645 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0830 20:27:02.456112 241645 node_conditions.go:123] node cpu capacity is 2
I0830 20:27:02.456123 241645 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0830 20:27:02.456130 241645 node_conditions.go:123] node cpu capacity is 2
I0830 20:27:02.456143 241645 node_conditions.go:105] duration metric: took 187.937298ms to run NodePressure ...
I0830 20:27:02.456156 241645 start.go:228] waiting for startup goroutines ...
I0830 20:27:02.456187 241645 start.go:242] writing updated cluster config ...
I0830 20:27:02.456584 241645 ssh_runner.go:195] Run: rm -f paused
I0830 20:27:02.505879 241645 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
I0830 20:27:02.508797 241645 out.go:177] * Done! kubectl is now configured to use "multinode-944570" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Wed 2023-08-30 20:24:49 UTC, ends at Wed 2023-08-30 20:28:25 UTC. --
Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.482193656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483519014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483631848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483662132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 30 20:25:51 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:51.483676215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:25:51 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:25:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1710b141f702688f2ac6c1123dd35b15c5c3dcf83e6a5b1ea4bbe967a5b28b11/resolv.conf as [nameserver 192.168.122.1]"
Aug 30 20:25:52 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:25:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/206abe062563a58c0cfef43fd491b8a3cae33b87e0cc0fced346e41ef4ec84e9/resolv.conf as [nameserver 192.168.122.1]"
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103856996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103900355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103914789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.103927998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.128211475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.129968934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.130290431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 30 20:25:52 multinode-944570 dockerd[1122]: time="2023-08-30T20:25:52.130363924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653262628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653322242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653347046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 30 20:27:03 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:03.653360640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:27:04 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:27:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9a370fce241da53845a9c9e91de36ae198942c881204465bc22a4c8c1b27b095/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Aug 30 20:27:06 multinode-944570 cri-dockerd[1011]: time="2023-08-30T20:27:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203490979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203696862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203716638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 30 20:27:06 multinode-944570 dockerd[1122]: time="2023-08-30T20:27:06.203729107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e1b3528e7e0a9 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 About a minute ago Running busybox 0 9a370fce241da
cd5d628cc7d23 6e38f40d628db 2 minutes ago Running storage-provisioner 0 206abe062563a
b8869105783a6 ead0a4a53df89 2 minutes ago Running coredns 0 1710b141f7026
750968b9a2208 kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974 2 minutes ago Running kindnet-cni 0 2ed885733ebb0
9331896493b39 6cdbabde3874e 2 minutes ago Running kube-proxy 0 d815034079fed
b1920bbf2f90a 821b3dfea27be 3 minutes ago Running kube-controller-manager 0 6031b9bfee95a
25034328bbdc8 b462ce0c8b1ff 3 minutes ago Running kube-scheduler 0 2d451861388c9
adc09d4d4deb2 5c801295c21d0 3 minutes ago Running kube-apiserver 0 34fdd725e5e61
2825b7061ea0c 73deb9a3f7025 3 minutes ago Running etcd 0 185a0d6cacc72
*
* ==> coredns [b8869105783a] <==
* [INFO] 10.244.0.3:53917 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093956s
[INFO] 10.244.1.2:37188 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204711s
[INFO] 10.244.1.2:60110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001691057s
[INFO] 10.244.1.2:33919 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152258s
[INFO] 10.244.1.2:44704 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010678s
[INFO] 10.244.1.2:37501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001396637s
[INFO] 10.244.1.2:58260 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121533s
[INFO] 10.244.1.2:38025 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098957s
[INFO] 10.244.1.2:44825 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081106s
[INFO] 10.244.0.3:41959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099305s
[INFO] 10.244.0.3:57839 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006831s
[INFO] 10.244.0.3:34586 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072659s
[INFO] 10.244.0.3:40296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054142s
[INFO] 10.244.1.2:50191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016399s
[INFO] 10.244.1.2:59772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160355s
[INFO] 10.244.1.2:33407 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000120004s
[INFO] 10.244.1.2:41985 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135351s
[INFO] 10.244.0.3:55767 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153943s
[INFO] 10.244.0.3:52348 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222259s
[INFO] 10.244.0.3:54368 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177557s
[INFO] 10.244.0.3:59329 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000714776s
[INFO] 10.244.1.2:46152 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232532s
[INFO] 10.244.1.2:60653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000151329s
[INFO] 10.244.1.2:52486 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177787s
[INFO] 10.244.1.2:39308 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092672s
*
* ==> describe nodes <==
* Name: multinode-944570
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-944570
kubernetes.io/os=linux
minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588
minikube.k8s.io/name=multinode-944570
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_08_30T20_25_27_0700
minikube.k8s.io/version=v1.31.2
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 30 Aug 2023 20:25:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-944570
AcquireTime: <unset>
RenewTime: Wed, 30 Aug 2023 20:28:19 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 30 Aug 2023 20:27:28 +0000 Wed, 30 Aug 2023 20:25:21 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 30 Aug 2023 20:27:28 +0000 Wed, 30 Aug 2023 20:25:21 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 30 Aug 2023 20:27:28 +0000 Wed, 30 Aug 2023 20:25:21 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 30 Aug 2023 20:27:28 +0000 Wed, 30 Aug 2023 20:25:50 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.254
Hostname: multinode-944570
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 476d2a07b648465491fd90796577f2f4
System UUID: 476d2a07-b648-4654-91fd-90796577f2f4
Boot ID: 384d102f-72a3-4c8d-a8c3-b3c37e330022
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.5
Kubelet Version: v1.28.1
Kube-Proxy Version: v1.28.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-fhrtd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83s
kube-system coredns-5dd5756b68-lzj6n 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 2m48s
kube-system etcd-multinode-944570 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 3m
kube-system kindnet-mm2wq 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 2m48s
kube-system kube-apiserver-multinode-944570 250m (12%) 0 (0%) 0 (0%) 0 (0%) 3m
kube-system kube-controller-manager-multinode-944570 200m (10%) 0 (0%) 0 (0%) 0 (0%) 3m
kube-system kube-proxy-nqnp2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m48s
kube-system kube-scheduler-multinode-944570 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m46s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m46s kube-proxy
Normal Starting 3m8s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m8s (x8 over 3m8s) kubelet Node multinode-944570 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m8s (x8 over 3m8s) kubelet Node multinode-944570 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m8s (x7 over 3m8s) kubelet Node multinode-944570 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m8s kubelet Updated Node Allocatable limit across pods
Normal Starting 3m1s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m kubelet Node multinode-944570 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m kubelet Node multinode-944570 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m kubelet Node multinode-944570 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m48s node-controller Node multinode-944570 event: Registered Node multinode-944570 in Controller
Normal NodeReady 2m36s kubelet Node multinode-944570 status is now: NodeReady
Name: multinode-944570-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-944570-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 30 Aug 2023 20:26:49 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-944570-m02
AcquireTime: <unset>
RenewTime: Wed, 30 Aug 2023 20:28:21 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 30 Aug 2023 20:27:19 +0000 Wed, 30 Aug 2023 20:26:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 30 Aug 2023 20:27:19 +0000 Wed, 30 Aug 2023 20:26:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 30 Aug 2023 20:27:19 +0000 Wed, 30 Aug 2023 20:26:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 30 Aug 2023 20:27:19 +0000 Wed, 30 Aug 2023 20:27:00 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.87
Hostname: multinode-944570-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: c216fdfaab6546fbad7f82c635ecd591
System UUID: c216fdfa-ab65-46fb-ad7f-82c635ecd591
Boot ID: 07082b47-b067-4d8d-bf7e-80a61581e642
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.5
Kubelet Version: v1.28.1
Kube-Proxy Version: v1.28.1
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-n5m7r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83s
kube-system kindnet-z8vqm 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 97s
kube-system kube-proxy-hrz7d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 97s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 91s kube-proxy
Normal NodeHasSufficientMemory 97s (x5 over 99s) kubelet Node multinode-944570-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 97s (x5 over 99s) kubelet Node multinode-944570-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 97s (x5 over 99s) kubelet Node multinode-944570-m02 status is now: NodeHasSufficientPID
Normal RegisteredNode 93s node-controller Node multinode-944570-m02 event: Registered Node multinode-944570-m02 in Controller
Normal NodeReady 86s kubelet Node multinode-944570-m02 status is now: NodeReady
Name: multinode-944570-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-944570-m03
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 30 Aug 2023 20:27:39 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-944570-m03
AcquireTime: <unset>
RenewTime: Wed, 30 Aug 2023 20:28:00 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 30 Aug 2023 20:27:52 +0000 Wed, 30 Aug 2023 20:27:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 30 Aug 2023 20:27:52 +0000 Wed, 30 Aug 2023 20:27:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 30 Aug 2023 20:27:52 +0000 Wed, 30 Aug 2023 20:27:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 30 Aug 2023 20:27:52 +0000 Wed, 30 Aug 2023 20:27:52 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.83
Hostname: multinode-944570-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 53de3e7dea67440ab78c23344d9deeb7
System UUID: 53de3e7d-ea67-440a-b78c-23344d9deeb7
Boot ID: 0f18c261-95b4-4797-a8c1-c19423e85cae
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.5
Kubelet Version: v1.28.1
Kube-Proxy Version: v1.28.1
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-fdzvb 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 46s
kube-system kube-proxy-6d9l8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type    Reason                   Age                From             Message
----    ------                   ---                ----             -------
Normal  Starting                 40s                kube-proxy
Normal  Starting                 47s                kubelet          Starting kubelet.
Normal  NodeHasSufficientMemory  47s (x2 over 47s)  kubelet          Node multinode-944570-m03 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    47s (x2 over 47s)  kubelet          Node multinode-944570-m03 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     47s (x2 over 47s)  kubelet          Node multinode-944570-m03 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
Normal  RegisteredNode           43s                node-controller  Node multinode-944570-m03 event: Registered Node multinode-944570-m03 in Controller
Normal  NodeReady                34s                kubelet          Node multinode-944570-m03 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.065621] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.196746] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.631564] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.136063] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +5.007151] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[Aug30 20:25] systemd-fstab-generator[548]: Ignoring "noauto" for root device
[ +0.101057] systemd-fstab-generator[559]: Ignoring "noauto" for root device
[ +1.024348] systemd-fstab-generator[736]: Ignoring "noauto" for root device
[ +0.265804] systemd-fstab-generator[775]: Ignoring "noauto" for root device
[ +0.110360] systemd-fstab-generator[786]: Ignoring "noauto" for root device
[ +0.114474] systemd-fstab-generator[799]: Ignoring "noauto" for root device
[ +1.453910] systemd-fstab-generator[956]: Ignoring "noauto" for root device
[ +0.105474] systemd-fstab-generator[967]: Ignoring "noauto" for root device
[ +0.109964] systemd-fstab-generator[978]: Ignoring "noauto" for root device
[ +0.108418] systemd-fstab-generator[989]: Ignoring "noauto" for root device
[ +0.119546] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
[ +4.078629] systemd-fstab-generator[1107]: Ignoring "noauto" for root device
[ +4.286668] kauditd_printk_skb: 53 callbacks suppressed
[ +3.588072] systemd-fstab-generator[1433]: Ignoring "noauto" for root device
[ +7.749620] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
[ +14.369829] kauditd_printk_skb: 39 callbacks suppressed
[ +7.238165] kauditd_printk_skb: 14 callbacks suppressed
*
* ==> etcd [2825b7061ea0] <==
* {"level":"info","ts":"2023-08-30T20:25:20.10645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a became candidate at term 2"}
{"level":"info","ts":"2023-08-30T20:25:20.106455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a received MsgVoteResp from 9b8de1e5bd82ef2a at term 2"}
{"level":"info","ts":"2023-08-30T20:25:20.106463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b8de1e5bd82ef2a became leader at term 2"}
{"level":"info","ts":"2023-08-30T20:25:20.10647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b8de1e5bd82ef2a elected leader 9b8de1e5bd82ef2a at term 2"}
{"level":"info","ts":"2023-08-30T20:25:20.107721Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-08-30T20:25:20.109839Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7053bcffcda7710c","local-member-id":"9b8de1e5bd82ef2a","cluster-version":"3.5"}
{"level":"info","ts":"2023-08-30T20:25:20.109933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-08-30T20:25:20.109951Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-08-30T20:25:20.10996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-08-30T20:25:20.109969Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9b8de1e5bd82ef2a","local-member-attributes":"{Name:multinode-944570 ClientURLs:[https://192.168.39.254:2379]}","request-path":"/0/members/9b8de1e5bd82ef2a/attributes","cluster-id":"7053bcffcda7710c","publish-timeout":"7s"}
{"level":"info","ts":"2023-08-30T20:25:20.109995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-08-30T20:25:20.110999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.254:2379"}
{"level":"info","ts":"2023-08-30T20:25:20.111054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-08-30T20:25:20.111326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-08-30T20:25:20.111338Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-08-30T20:25:46.147973Z","caller":"traceutil/trace.go:171","msg":"trace[341296804] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"129.655578ms","start":"2023-08-30T20:25:46.018289Z","end":"2023-08-30T20:25:46.147945Z","steps":["trace[341296804] 'process raft request' (duration: 129.464671ms)"],"step_count":1}
{"level":"warn","ts":"2023-08-30T20:27:40.101784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.633799ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17233738966452560102 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-944570-m03.1780431ee2e77843\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-944570-m03.1780431ee2e77843\" value_size:642 lease:8010366929597783856 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2023-08-30T20:27:40.102087Z","caller":"traceutil/trace.go:171","msg":"trace[471184055] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"200.125754ms","start":"2023-08-30T20:27:39.901933Z","end":"2023-08-30T20:27:40.102059Z","steps":["trace[471184055] 'process raft request' (duration: 200.074805ms)"],"step_count":1}
{"level":"info","ts":"2023-08-30T20:27:40.102366Z","caller":"traceutil/trace.go:171","msg":"trace[1809665780] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"256.669107ms","start":"2023-08-30T20:27:39.845688Z","end":"2023-08-30T20:27:40.102357Z","steps":["trace[1809665780] 'process raft request' (duration: 85.440625ms)","trace[1809665780] 'compare' (duration: 169.276548ms)"],"step_count":2}
{"level":"info","ts":"2023-08-30T20:27:40.102543Z","caller":"traceutil/trace.go:171","msg":"trace[424198061] linearizableReadLoop","detail":"{readStateIndex:674; appliedIndex:673; }","duration":"237.07016ms","start":"2023-08-30T20:27:39.865464Z","end":"2023-08-30T20:27:40.102535Z","steps":["trace[424198061] 'read index received' (duration: 65.671284ms)","trace[424198061] 'applied index is now lower than readState.Index' (duration: 171.397525ms)"],"step_count":2}
{"level":"warn","ts":"2023-08-30T20:27:40.102776Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.321814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-944570-m03\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-08-30T20:27:40.102809Z","caller":"traceutil/trace.go:171","msg":"trace[116404222] range","detail":"{range_begin:/registry/csinodes/multinode-944570-m03; range_end:; response_count:0; response_revision:638; }","duration":"237.361556ms","start":"2023-08-30T20:27:39.86544Z","end":"2023-08-30T20:27:40.102801Z","steps":["trace[116404222] 'agreement among raft nodes before linearized reading' (duration: 237.282389ms)"],"step_count":1}
{"level":"info","ts":"2023-08-30T20:27:40.103049Z","caller":"traceutil/trace.go:171","msg":"trace[295565228] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"166.929682ms","start":"2023-08-30T20:27:39.936111Z","end":"2023-08-30T20:27:40.103041Z","steps":["trace[295565228] 'process raft request' (duration: 166.853071ms)"],"step_count":1}
{"level":"info","ts":"2023-08-30T20:27:44.060424Z","caller":"traceutil/trace.go:171","msg":"trace[134648594] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"181.595056ms","start":"2023-08-30T20:27:43.878784Z","end":"2023-08-30T20:27:44.060379Z","steps":["trace[134648594] 'process raft request' (duration: 181.444379ms)"],"step_count":1}
{"level":"info","ts":"2023-08-30T20:27:44.438441Z","caller":"traceutil/trace.go:171","msg":"trace[308397188] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"127.691503ms","start":"2023-08-30T20:27:44.310733Z","end":"2023-08-30T20:27:44.438425Z","steps":["trace[308397188] 'process raft request' (duration: 62.470279ms)","trace[308397188] 'compare' (duration: 65.144951ms)"],"step_count":2}
*
* ==> kernel <==
* 20:28:26 up 3 min, 0 users, load average: 0.27, 0.28, 0.12
Linux multinode-944570 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [750968b9a220] <==
* I0830 20:27:47.280213 1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
I0830 20:27:47.280271 1 main.go:227] handling current node
I0830 20:27:47.280298 1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
I0830 20:27:47.280308 1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24]
I0830 20:27:47.281466 1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
I0830 20:27:47.281564 1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24]
I0830 20:27:47.281797 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.83 Flags: [] Table: 0}
I0830 20:27:57.357932 1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
I0830 20:27:57.358004 1 main.go:227] handling current node
I0830 20:27:57.358026 1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
I0830 20:27:57.358034 1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24]
I0830 20:27:57.358662 1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
I0830 20:27:57.358683 1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24]
I0830 20:28:07.365459 1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
I0830 20:28:07.365952 1 main.go:227] handling current node
I0830 20:28:07.366146 1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
I0830 20:28:07.366278 1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24]
I0830 20:28:07.366571 1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
I0830 20:28:07.366737 1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24]
I0830 20:28:17.376317 1 main.go:223] Handling node with IPs: map[192.168.39.254:{}]
I0830 20:28:17.376366 1 main.go:227] handling current node
I0830 20:28:17.376381 1 main.go:223] Handling node with IPs: map[192.168.39.87:{}]
I0830 20:28:17.376388 1 main.go:250] Node multinode-944570-m02 has CIDR [10.244.1.0/24]
I0830 20:28:17.376960 1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
I0830 20:28:17.376995 1 main.go:250] Node multinode-944570-m03 has CIDR [10.244.2.0/24]
*
* ==> kube-apiserver [adc09d4d4deb] <==
* I0830 20:25:22.597041 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0830 20:25:22.597831 1 apf_controller.go:377] Running API Priority and Fairness config worker
I0830 20:25:22.597861 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I0830 20:25:22.600816 1 controller.go:624] quota admission added evaluator for: namespaces
I0830 20:25:22.610990 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0830 20:25:22.634578 1 shared_informer.go:318] Caches are synced for crd-autoregister
I0830 20:25:22.635435 1 aggregator.go:166] initial CRD sync complete...
I0830 20:25:22.635464 1 autoregister_controller.go:141] Starting autoregister controller
I0830 20:25:22.635470 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0830 20:25:22.635476 1 cache.go:39] Caches are synced for autoregister controller
I0830 20:25:23.398767 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0830 20:25:23.408509 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0830 20:25:23.408550 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0830 20:25:24.099887 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0830 20:25:24.143668 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0830 20:25:24.218698 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0830 20:25:24.225161 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.254]
I0830 20:25:24.226021 1 controller.go:624] quota admission added evaluator for: endpoints
I0830 20:25:24.232830 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0830 20:25:24.514851 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0830 20:25:25.700267 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0830 20:25:25.718273 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0830 20:25:25.731768 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0830 20:25:38.796450 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
I0830 20:25:38.810993 1 controller.go:624] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [b1920bbf2f90] <==
* I0830 20:26:49.038684 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hrz7d"
I0830 20:26:49.046556 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z8vqm"
I0830 20:26:53.780042 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-944570-m02"
I0830 20:26:53.780457 1 event.go:307] "Event occurred" object="multinode-944570-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-944570-m02 event: Registered Node multinode-944570-m02 in Controller"
I0830 20:27:00.888574 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-944570-m02"
I0830 20:27:03.208580 1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
I0830 20:27:03.222358 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-n5m7r"
I0830 20:27:03.241367 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-fhrtd"
I0830 20:27:03.260453 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.74723ms"
I0830 20:27:03.282958 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="22.427535ms"
I0830 20:27:03.298407 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.179588ms"
I0830 20:27:03.298757 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.478µs"
I0830 20:27:03.790057 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-n5m7r" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-n5m7r"
I0830 20:27:06.109051 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.667083ms"
I0830 20:27:06.109785 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.77µs"
I0830 20:27:06.611733 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.529782ms"
I0830 20:27:06.611996 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.693µs"
I0830 20:27:40.105865 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-944570-m02"
I0830 20:27:40.106964 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-944570-m03\" does not exist"
I0830 20:27:40.116181 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-944570-m03" podCIDRs=["10.244.2.0/24"]
I0830 20:27:40.130772 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6d9l8"
I0830 20:27:40.131880 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fdzvb"
I0830 20:27:43.797065 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-944570-m03"
I0830 20:27:43.797579 1 event.go:307] "Event occurred" object="multinode-944570-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-944570-m03 event: Registered Node multinode-944570-m03 in Controller"
I0830 20:27:52.544342 1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-944570-m03"
*
* ==> kube-proxy [9331896493b3] <==
* I0830 20:25:39.814424 1 server_others.go:69] "Using iptables proxy"
I0830 20:25:39.823656 1 node.go:141] Successfully retrieved node IP: 192.168.39.254
I0830 20:25:39.892141 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0830 20:25:39.892181 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0830 20:25:39.894638 1 server_others.go:152] "Using iptables Proxier"
I0830 20:25:39.894732 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0830 20:25:39.894972 1 server.go:846] "Version info" version="v1.28.1"
I0830 20:25:39.894981 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0830 20:25:39.896868 1 config.go:188] "Starting service config controller"
I0830 20:25:39.896925 1 shared_informer.go:311] Waiting for caches to sync for service config
I0830 20:25:39.896944 1 config.go:97] "Starting endpoint slice config controller"
I0830 20:25:39.896948 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0830 20:25:39.899372 1 config.go:315] "Starting node config controller"
I0830 20:25:39.899405 1 shared_informer.go:311] Waiting for caches to sync for node config
I0830 20:25:39.997975 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0830 20:25:39.998085 1 shared_informer.go:318] Caches are synced for service config
I0830 20:25:40.000485 1 shared_informer.go:318] Caches are synced for node config
*
* ==> kube-scheduler [25034328bbdc] <==
* W0830 20:25:22.571162 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0830 20:25:22.571796 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0830 20:25:22.571229 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0830 20:25:22.571950 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0830 20:25:22.571266 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0830 20:25:22.572170 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0830 20:25:22.574705 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0830 20:25:22.574742 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0830 20:25:23.432411 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0830 20:25:23.432482 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0830 20:25:23.504487 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0830 20:25:23.504740 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0830 20:25:23.523851 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0830 20:25:23.524157 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0830 20:25:23.612877 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0830 20:25:23.612920 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0830 20:25:23.677056 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0830 20:25:23.677080 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0830 20:25:23.677125 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0830 20:25:23.677164 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0830 20:25:23.769321 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0830 20:25:23.769404 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0830 20:25:24.055162 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0830 20:25:24.055427 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0830 20:25:26.345499 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Wed 2023-08-30 20:24:49 UTC, ends at Wed 2023-08-30 20:28:26 UTC. --
Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.031865 2360 topology_manager.go:215] "Topology Admit Handler" podUID="4e79c194-f047-45a2-9ed4-ffafbe983cda" podNamespace="kube-system" podName="storage-provisioner"
Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.050071 2360 topology_manager.go:215] "Topology Admit Handler" podUID="19a6c9fa-86e0-4e7f-a62b-28ee984bdd45" podNamespace="kube-system" podName="coredns-5dd5756b68-lzj6n"
Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170243 2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19a6c9fa-86e0-4e7f-a62b-28ee984bdd45-config-volume\") pod \"coredns-5dd5756b68-lzj6n\" (UID: \"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45\") " pod="kube-system/coredns-5dd5756b68-lzj6n"
Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170321 2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4e79c194-f047-45a2-9ed4-ffafbe983cda-tmp\") pod \"storage-provisioner\" (UID: \"4e79c194-f047-45a2-9ed4-ffafbe983cda\") " pod="kube-system/storage-provisioner"
Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170346 2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh2f6\" (UniqueName: \"kubernetes.io/projected/4e79c194-f047-45a2-9ed4-ffafbe983cda-kube-api-access-sh2f6\") pod \"storage-provisioner\" (UID: \"4e79c194-f047-45a2-9ed4-ffafbe983cda\") " pod="kube-system/storage-provisioner"
Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.170377 2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqp4\" (UniqueName: \"kubernetes.io/projected/19a6c9fa-86e0-4e7f-a62b-28ee984bdd45-kube-api-access-hbqp4\") pod \"coredns-5dd5756b68-lzj6n\" (UID: \"19a6c9fa-86e0-4e7f-a62b-28ee984bdd45\") " pod="kube-system/coredns-5dd5756b68-lzj6n"
Aug 30 20:25:51 multinode-944570 kubelet[2360]: I0830 20:25:51.940875 2360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1710b141f702688f2ac6c1123dd35b15c5c3dcf83e6a5b1ea4bbe967a5b28b11"
Aug 30 20:25:52 multinode-944570 kubelet[2360]: I0830 20:25:52.029409 2360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="206abe062563a58c0cfef43fd491b8a3cae33b87e0cc0fced346e41ef4ec84e9"
Aug 30 20:25:53 multinode-944570 kubelet[2360]: I0830 20:25:53.056685 2360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.056573184 podCreationTimestamp="2023-08-30 20:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 20:25:53.056381 +0000 UTC m=+27.388191130" watchObservedRunningTime="2023-08-30 20:25:53.056573184 +0000 UTC m=+27.388383315"
Aug 30 20:25:53 multinode-944570 kubelet[2360]: I0830 20:25:53.074074 2360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lzj6n" podStartSLOduration=15.074038115 podCreationTimestamp="2023-08-30 20:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 20:25:53.073980952 +0000 UTC m=+27.405791082" watchObservedRunningTime="2023-08-30 20:25:53.074038115 +0000 UTC m=+27.405848245"
Aug 30 20:26:26 multinode-944570 kubelet[2360]: E0830 20:26:26.047107 2360 iptables.go:575] "Could not set up iptables canary" err=<
Aug 30 20:26:26 multinode-944570 kubelet[2360]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 30 20:26:26 multinode-944570 kubelet[2360]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 30 20:26:26 multinode-944570 kubelet[2360]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 30 20:27:03 multinode-944570 kubelet[2360]: I0830 20:27:03.257824 2360 topology_manager.go:215] "Topology Admit Handler" podUID="d0a3ab29-c39e-48e3-8a1b-b64572e1729f" podNamespace="default" podName="busybox-5bc68d56bd-fhrtd"
Aug 30 20:27:03 multinode-944570 kubelet[2360]: I0830 20:27:03.375645 2360 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j47wk\" (UniqueName: \"kubernetes.io/projected/d0a3ab29-c39e-48e3-8a1b-b64572e1729f-kube-api-access-j47wk\") pod \"busybox-5bc68d56bd-fhrtd\" (UID: \"d0a3ab29-c39e-48e3-8a1b-b64572e1729f\") " pod="default/busybox-5bc68d56bd-fhrtd"
Aug 30 20:27:06 multinode-944570 kubelet[2360]: I0830 20:27:06.608752 2360 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-fhrtd" podStartSLOduration=1.706541547 podCreationTimestamp="2023-08-30 20:27:03 +0000 UTC" firstStartedPulling="2023-08-30 20:27:04.151331512 +0000 UTC m=+98.483141626" lastFinishedPulling="2023-08-30 20:27:06.052436217 +0000 UTC m=+100.384246343" observedRunningTime="2023-08-30 20:27:06.607311977 +0000 UTC m=+100.939122107" watchObservedRunningTime="2023-08-30 20:27:06.607646264 +0000 UTC m=+100.939456397"
Aug 30 20:27:26 multinode-944570 kubelet[2360]: E0830 20:27:26.045773 2360 iptables.go:575] "Could not set up iptables canary" err=<
Aug 30 20:27:26 multinode-944570 kubelet[2360]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 30 20:27:26 multinode-944570 kubelet[2360]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 30 20:27:26 multinode-944570 kubelet[2360]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 30 20:28:26 multinode-944570 kubelet[2360]: E0830 20:28:26.048373 2360 iptables.go:575] "Could not set up iptables canary" err=<
Aug 30 20:28:26 multinode-944570 kubelet[2360]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 30 20:28:26 multinode-944570 kubelet[2360]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 30 20:28:26 multinode-944570 kubelet[2360]: > table="nat" chain="KUBE-KUBELET-CANARY"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-944570 -n multinode-944570
helpers_test.go:261: (dbg) Run: kubectl --context multinode-944570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (20.66s)