=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run: out/minikube-linux-amd64 -p multinode-935345 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-935345 node start m03 --alsologtostderr: exit status 90 (18.202482024s)
-- stdout --
* Starting worker node multinode-935345-m03 in cluster multinode-935345
* Restarting existing kvm2 VM for "multinode-935345-m03" ...
-- /stdout --
** stderr **
I0524 21:07:52.723122 29393 out.go:296] Setting OutFile to fd 1 ...
I0524 21:07:52.723254 29393 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 21:07:52.723262 29393 out.go:309] Setting ErrFile to fd 2...
I0524 21:07:52.723266 29393 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 21:07:52.723366 29393 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16572-7844/.minikube/bin
I0524 21:07:52.723605 29393 mustload.go:65] Loading cluster: multinode-935345
I0524 21:07:52.723920 29393 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:07:52.724221 29393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:07:52.724264 29393 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:07:52.738512 29393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34199
I0524 21:07:52.738920 29393 main.go:141] libmachine: () Calling .GetVersion
I0524 21:07:52.739434 29393 main.go:141] libmachine: Using API Version 1
I0524 21:07:52.739453 29393 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:07:52.739787 29393 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:07:52.739976 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetState
W0524 21:07:52.741394 29393 host.go:58] "multinode-935345-m03" host status: Stopped
I0524 21:07:52.744037 29393 out.go:177] * Starting worker node multinode-935345-m03 in cluster multinode-935345
I0524 21:07:52.745766 29393 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:07:52.745792 29393 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
I0524 21:07:52.745807 29393 cache.go:57] Caching tarball of preloaded images
I0524 21:07:52.745878 29393 preload.go:174] Found /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0524 21:07:52.745889 29393 cache.go:60] Finished verifying existence of preloaded tar for v1.27.2 on docker
I0524 21:07:52.746040 29393 profile.go:148] Saving config to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json ...
I0524 21:07:52.746214 29393 cache.go:195] Successfully downloaded all kic artifacts
I0524 21:07:52.746233 29393 start.go:364] acquiring machines lock for multinode-935345-m03: {Name:mk4a40b66c29ad20ca421f9aaaf38de8f4a54848 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0524 21:07:52.746296 29393 start.go:368] acquired machines lock for "multinode-935345-m03" in 29.064µs
I0524 21:07:52.746312 29393 start.go:96] Skipping create...Using existing machine configuration
I0524 21:07:52.746320 29393 fix.go:55] fixHost starting: m03
I0524 21:07:52.746674 29393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:07:52.746698 29393 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:07:52.760831 29393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
I0524 21:07:52.761215 29393 main.go:141] libmachine: () Calling .GetVersion
I0524 21:07:52.761704 29393 main.go:141] libmachine: Using API Version 1
I0524 21:07:52.761723 29393 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:07:52.762027 29393 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:07:52.762216 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:07:52.762371 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetState
I0524 21:07:52.763673 29393 fix.go:103] recreateIfNeeded on multinode-935345-m03: state=Stopped err=<nil>
I0524 21:07:52.763696 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
W0524 21:07:52.763849 29393 fix.go:129] unexpected machine state, will restart: <nil>
I0524 21:07:52.766182 29393 out.go:177] * Restarting existing kvm2 VM for "multinode-935345-m03" ...
I0524 21:07:52.767787 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .Start
I0524 21:07:52.767931 29393 main.go:141] libmachine: (multinode-935345-m03) Ensuring networks are active...
I0524 21:07:52.768646 29393 main.go:141] libmachine: (multinode-935345-m03) Ensuring network default is active
I0524 21:07:52.768933 29393 main.go:141] libmachine: (multinode-935345-m03) Ensuring network mk-multinode-935345 is active
I0524 21:07:52.769251 29393 main.go:141] libmachine: (multinode-935345-m03) Getting domain xml...
I0524 21:07:52.769947 29393 main.go:141] libmachine: (multinode-935345-m03) Creating domain...
I0524 21:07:54.040214 29393 main.go:141] libmachine: (multinode-935345-m03) Waiting to get IP...
I0524 21:07:54.041268 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:07:54.041661 29393 main.go:141] libmachine: (multinode-935345-m03) Found IP for machine: 192.168.39.9
I0524 21:07:54.041698 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has current primary IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:07:54.041708 29393 main.go:141] libmachine: (multinode-935345-m03) Reserving static IP address...
I0524 21:07:54.042153 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "multinode-935345-m03", mac: "52:54:00:3b:b8:89", ip: "192.168.39.9"} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:07:54.042178 29393 main.go:141] libmachine: (multinode-935345-m03) Reserved static IP address: 192.168.39.9
I0524 21:07:54.042196 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | skip adding static IP to network mk-multinode-935345 - found existing host DHCP lease matching {name: "multinode-935345-m03", mac: "52:54:00:3b:b8:89", ip: "192.168.39.9"}
I0524 21:07:54.042212 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | Getting to WaitForSSH function...
I0524 21:07:54.042230 29393 main.go:141] libmachine: (multinode-935345-m03) Waiting for SSH to be available...
I0524 21:07:54.044399 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:07:54.044718 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:07:54.044750 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:07:54.044904 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | Using SSH client type: external
I0524 21:07:54.044933 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa (-rw-------)
I0524 21:07:54.045026 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0524 21:07:54.045059 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | About to run SSH command:
I0524 21:07:54.045073 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | exit 0
I0524 21:08:06.150490 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | SSH cmd err, output: <nil>:
I0524 21:08:06.150948 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetConfigRaw
I0524 21:08:06.151570 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetIP
I0524 21:08:06.154088 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.154457 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.154486 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.154774 29393 profile.go:148] Saving config to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json ...
I0524 21:08:06.154947 29393 machine.go:88] provisioning docker machine ...
I0524 21:08:06.154966 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:06.155153 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetMachineName
I0524 21:08:06.155300 29393 buildroot.go:166] provisioning hostname "multinode-935345-m03"
I0524 21:08:06.155313 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetMachineName
I0524 21:08:06.155425 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.157595 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.157926 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.157952 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.158112 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.158252 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.158361 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.158505 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.158711 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.159172 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.159189 29393 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-935345-m03 && echo "multinode-935345-m03" | sudo tee /etc/hostname
I0524 21:08:06.313290 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-935345-m03
I0524 21:08:06.313356 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.316150 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.316478 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.316508 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.316687 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.316871 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.317058 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.317144 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.317290 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.317689 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.317713 29393 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-935345-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-935345-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-935345-m03' | sudo tee -a /etc/hosts;
fi
fi
I0524 21:08:06.449770 29393 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0524 21:08:06.449798 29393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16572-7844/.minikube CaCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16572-7844/.minikube}
I0524 21:08:06.449852 29393 buildroot.go:174] setting up certificates
I0524 21:08:06.449865 29393 provision.go:83] configureAuth start
I0524 21:08:06.449883 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetMachineName
I0524 21:08:06.450171 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetIP
I0524 21:08:06.452657 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.453041 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.453070 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.453207 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.455392 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.455791 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.455833 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.455929 29393 provision.go:138] copyHostCerts
I0524 21:08:06.455999 29393 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem, removing ...
I0524 21:08:06.456008 29393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem
I0524 21:08:06.456066 29393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem (1123 bytes)
I0524 21:08:06.456166 29393 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem, removing ...
I0524 21:08:06.456174 29393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem
I0524 21:08:06.456202 29393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem (1679 bytes)
I0524 21:08:06.456260 29393 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem, removing ...
I0524 21:08:06.456267 29393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem
I0524 21:08:06.456285 29393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem (1078 bytes)
I0524 21:08:06.456342 29393 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem org=jenkins.multinode-935345-m03 san=[192.168.39.9 192.168.39.9 localhost 127.0.0.1 minikube multinode-935345-m03]
I0524 21:08:06.529907 29393 provision.go:172] copyRemoteCerts
I0524 21:08:06.529963 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0524 21:08:06.529983 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.532439 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.532784 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.532817 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.533020 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.533212 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.533375 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.533513 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:06.625384 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0524 21:08:06.648994 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0524 21:08:06.672057 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0524 21:08:06.695108 29393 provision.go:86] duration metric: configureAuth took 245.22724ms
I0524 21:08:06.695137 29393 buildroot.go:189] setting minikube options for container-runtime
I0524 21:08:06.695376 29393 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:08:06.695414 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:06.695720 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.697981 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.698365 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.698400 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.698539 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.698720 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.698878 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.699001 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.699142 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.699721 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.699739 29393 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0524 21:08:06.828273 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0524 21:08:06.828300 29393 buildroot.go:70] root file system type: tmpfs
I0524 21:08:06.828427 29393 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0524 21:08:06.828458 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.831063 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.831427 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.831456 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.831639 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.831829 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.832011 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.832152 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.832368 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.832934 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.833036 29393 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0524 21:08:06.973085 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0524 21:08:06.973122 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.975709 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.976017 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.976039 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.976242 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.976419 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.976561 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.976687 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.976849 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.977454 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.977491 29393 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0524 21:08:07.818790 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0524 21:08:07.818821 29393 machine.go:91] provisioned docker machine in 1.663861009s
I0524 21:08:07.818833 29393 start.go:300] post-start starting for "multinode-935345-m03" (driver="kvm2")
I0524 21:08:07.818841 29393 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0524 21:08:07.818861 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:07.819214 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0524 21:08:07.819245 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:07.821820 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.822209 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:07.822230 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.822364 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:07.822564 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:07.822691 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:07.822826 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:07.916029 29393 ssh_runner.go:195] Run: cat /etc/os-release
I0524 21:08:07.920171 29393 info.go:137] Remote host: Buildroot 2021.02.12
I0524 21:08:07.920187 29393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/addons for local assets ...
I0524 21:08:07.920242 29393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/files for local assets ...
I0524 21:08:07.920307 29393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> 150552.pem in /etc/ssl/certs
I0524 21:08:07.920386 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0524 21:08:07.929237 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem --> /etc/ssl/certs/150552.pem (1708 bytes)
I0524 21:08:07.951429 29393 start.go:303] post-start completed in 132.583001ms
I0524 21:08:07.951446 29393 fix.go:57] fixHost completed within 15.205124999s
I0524 21:08:07.951468 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:07.953743 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.954047 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:07.954077 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.954237 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:07.954427 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:07.954629 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:07.954791 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:07.954971 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:07.955556 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:07.955573 29393 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0524 21:08:08.083407 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684962488.031261266
I0524 21:08:08.083427 29393 fix.go:207] guest clock: 1684962488.031261266
I0524 21:08:08.083445 29393 fix.go:220] Guest: 2023-05-24 21:08:08.031261266 +0000 UTC Remote: 2023-05-24 21:08:07.951450288 +0000 UTC m=+15.259104499 (delta=79.810978ms)
I0524 21:08:08.083471 29393 fix.go:191] guest clock delta is within tolerance: 79.810978ms
I0524 21:08:08.083481 29393 start.go:83] releasing machines lock for "multinode-935345-m03", held for 15.337174935s
I0524 21:08:08.083510 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.083781 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetIP
I0524 21:08:08.086380 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.086770 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:08.086792 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.087009 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.087476 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.087663 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.087743 29393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0524 21:08:08.087790 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:08.087887 29393 ssh_runner.go:195] Run: systemctl --version
I0524 21:08:08.087916 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:08.090344 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090418 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090811 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:08.090841 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090870 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:08.090889 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090975 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:08.091104 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:08.091180 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:08.091244 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:08.091314 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:08.091366 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:08.091427 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:08.091476 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:08.212573 29393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0524 21:08:08.218588 29393 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0524 21:08:08.218645 29393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0524 21:08:08.237733 29393 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0524 21:08:08.237757 29393 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:08:08.237841 29393 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0524 21:08:08.261421 29393 docker.go:633] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
kindest/kindnetd:v20230511-dc714da8
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0524 21:08:08.261441 29393 docker.go:563] Images already preloaded, skipping extraction
I0524 21:08:08.261448 29393 start.go:481] detecting cgroup driver to use...
I0524 21:08:08.261557 29393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:08:08.278464 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0524 21:08:08.288399 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0524 21:08:08.299547 29393 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0524 21:08:08.299607 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0524 21:08:08.311057 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:08:08.322161 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0524 21:08:08.333679 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:08:08.344862 29393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0524 21:08:08.356312 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0524 21:08:08.367373 29393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0524 21:08:08.375991 29393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0524 21:08:08.384771 29393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:08:08.483338 29393 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0524 21:08:08.500832 29393 start.go:481] detecting cgroup driver to use...
I0524 21:08:08.500915 29393 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0524 21:08:08.515568 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:08:08.530456 29393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0524 21:08:08.549533 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:08:08.561186 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:08:08.573211 29393 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0524 21:08:08.600928 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:08:08.613648 29393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:08:08.630477 29393 ssh_runner.go:195] Run: which cri-dockerd
I0524 21:08:08.634500 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0524 21:08:08.644665 29393 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0524 21:08:08.660290 29393 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0524 21:08:08.758267 29393 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0524 21:08:08.867396 29393 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
I0524 21:08:08.867421 29393 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0524 21:08:08.883447 29393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:08:08.980924 29393 ssh_runner.go:195] Run: sudo systemctl restart docker
I0524 21:08:10.419637 29393 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.438668975s)
I0524 21:08:10.419690 29393 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:08:10.518113 29393 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0524 21:08:10.629098 29393 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:08:10.739411 29393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:08:10.865467 29393 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0524 21:08:10.880832 29393 out.go:177]
W0524 21:08:10.882639 29393 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0524 21:08:10.882653 29393 out.go:239] *
W0524 21:08:10.885697 29393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0524 21:08:10.887548 29393 out.go:177]
** /stderr **
I0524 21:08:06.529907 29393 provision.go:172] copyRemoteCerts
I0524 21:08:06.529963 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0524 21:08:06.529983 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.532439 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.532784 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.532817 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.533020 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.533212 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.533375 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.533513 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:06.625384 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0524 21:08:06.648994 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0524 21:08:06.672057 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0524 21:08:06.695108 29393 provision.go:86] duration metric: configureAuth took 245.22724ms
I0524 21:08:06.695137 29393 buildroot.go:189] setting minikube options for container-runtime
I0524 21:08:06.695376 29393 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:08:06.695414 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:06.695720 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.697981 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.698365 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.698400 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.698539 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.698720 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.698878 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.699001 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.699142 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.699721 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.699739 29393 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0524 21:08:06.828273 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0524 21:08:06.828300 29393 buildroot.go:70] root file system type: tmpfs
I0524 21:08:06.828427 29393 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0524 21:08:06.828458 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.831063 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.831427 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.831456 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.831639 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.831829 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.832011 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.832152 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.832368 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.832934 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.833036 29393 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0524 21:08:06.973085 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0524 21:08:06.973122 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:06.975709 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.976017 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:06.976039 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:06.976242 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:06.976419 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.976561 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:06.976687 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:06.976849 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:06.977454 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:06.977491 29393 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0524 21:08:07.818790 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0524 21:08:07.818821 29393 machine.go:91] provisioned docker machine in 1.663861009s
I0524 21:08:07.818833 29393 start.go:300] post-start starting for "multinode-935345-m03" (driver="kvm2")
I0524 21:08:07.818841 29393 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0524 21:08:07.818861 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:07.819214 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0524 21:08:07.819245 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:07.821820 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.822209 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:07.822230 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.822364 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:07.822564 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:07.822691 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:07.822826 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:07.916029 29393 ssh_runner.go:195] Run: cat /etc/os-release
I0524 21:08:07.920171 29393 info.go:137] Remote host: Buildroot 2021.02.12
I0524 21:08:07.920187 29393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/addons for local assets ...
I0524 21:08:07.920242 29393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/files for local assets ...
I0524 21:08:07.920307 29393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> 150552.pem in /etc/ssl/certs
I0524 21:08:07.920386 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0524 21:08:07.929237 29393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem --> /etc/ssl/certs/150552.pem (1708 bytes)
I0524 21:08:07.951429 29393 start.go:303] post-start completed in 132.583001ms
I0524 21:08:07.951446 29393 fix.go:57] fixHost completed within 15.205124999s
I0524 21:08:07.951468 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:07.953743 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.954047 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:07.954077 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:07.954237 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:07.954427 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:07.954629 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:07.954791 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:07.954971 29393 main.go:141] libmachine: Using SSH client type: native
I0524 21:08:07.955556 29393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.9 22 <nil> <nil>}
I0524 21:08:07.955573 29393 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0524 21:08:08.083407 29393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684962488.031261266
I0524 21:08:08.083427 29393 fix.go:207] guest clock: 1684962488.031261266
I0524 21:08:08.083445 29393 fix.go:220] Guest: 2023-05-24 21:08:08.031261266 +0000 UTC Remote: 2023-05-24 21:08:07.951450288 +0000 UTC m=+15.259104499 (delta=79.810978ms)
I0524 21:08:08.083471 29393 fix.go:191] guest clock delta is within tolerance: 79.810978ms
I0524 21:08:08.083481 29393 start.go:83] releasing machines lock for "multinode-935345-m03", held for 15.337174935s
I0524 21:08:08.083510 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.083781 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetIP
I0524 21:08:08.086380 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.086770 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:08.086792 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.087009 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.087476 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.087663 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .DriverName
I0524 21:08:08.087743 29393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0524 21:08:08.087790 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:08.087887 29393 ssh_runner.go:195] Run: systemctl --version
I0524 21:08:08.087916 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHHostname
I0524 21:08:08.090344 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090418 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090811 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:08.090841 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090870 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:b8:89", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:07:01 +0000 UTC Type:0 Mac:52:54:00:3b:b8:89 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-935345-m03 Clientid:01:52:54:00:3b:b8:89}
I0524 21:08:08.090889 29393 main.go:141] libmachine: (multinode-935345-m03) DBG | domain multinode-935345-m03 has defined IP address 192.168.39.9 and MAC address 52:54:00:3b:b8:89 in network mk-multinode-935345
I0524 21:08:08.090975 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:08.091104 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHPort
I0524 21:08:08.091180 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:08.091244 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHKeyPath
I0524 21:08:08.091314 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:08.091366 29393 main.go:141] libmachine: (multinode-935345-m03) Calling .GetSSHUsername
I0524 21:08:08.091427 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:08.091476 29393 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m03/id_rsa Username:docker}
I0524 21:08:08.212573 29393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0524 21:08:08.218588 29393 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0524 21:08:08.218645 29393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0524 21:08:08.237733 29393 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0524 21:08:08.237757 29393 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:08:08.237841 29393 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0524 21:08:08.261421 29393 docker.go:633] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
kindest/kindnetd:v20230511-dc714da8
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0524 21:08:08.261441 29393 docker.go:563] Images already preloaded, skipping extraction
I0524 21:08:08.261448 29393 start.go:481] detecting cgroup driver to use...
I0524 21:08:08.261557 29393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:08:08.278464 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0524 21:08:08.288399 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0524 21:08:08.299547 29393 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0524 21:08:08.299607 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0524 21:08:08.311057 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:08:08.322161 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0524 21:08:08.333679 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:08:08.344862 29393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0524 21:08:08.356312 29393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0524 21:08:08.367373 29393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0524 21:08:08.375991 29393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0524 21:08:08.384771 29393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:08:08.483338 29393 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0524 21:08:08.500832 29393 start.go:481] detecting cgroup driver to use...
I0524 21:08:08.500915 29393 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0524 21:08:08.515568 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:08:08.530456 29393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0524 21:08:08.549533 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:08:08.561186 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:08:08.573211 29393 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0524 21:08:08.600928 29393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:08:08.613648 29393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:08:08.630477 29393 ssh_runner.go:195] Run: which cri-dockerd
I0524 21:08:08.634500 29393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0524 21:08:08.644665 29393 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0524 21:08:08.660290 29393 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0524 21:08:08.758267 29393 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0524 21:08:08.867396 29393 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
I0524 21:08:08.867421 29393 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0524 21:08:08.883447 29393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:08:08.980924 29393 ssh_runner.go:195] Run: sudo systemctl restart docker
I0524 21:08:10.419637 29393 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.438668975s)
I0524 21:08:10.419690 29393 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:08:10.518113 29393 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0524 21:08:10.629098 29393 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:08:10.739411 29393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:08:10.865467 29393 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0524 21:08:10.880832 29393 out.go:177]
W0524 21:08:10.882639 29393 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0524 21:08:10.882653 29393 out.go:239] *
W0524 21:08:10.885697 29393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0524 21:08:10.887548 29393 out.go:177]
multinode_test.go:257: node start returned an error. args "out/minikube-linux-amd64 -p multinode-935345 node start m03 --alsologtostderr": exit status 90
multinode_test.go:261: (dbg) Run: out/minikube-linux-amd64 -p multinode-935345 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-935345 status: exit status 2 (556.123054ms)
-- stdout --
multinode-935345
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
multinode-935345-m02
type: Worker
host: Running
kubelet: Running
multinode-935345-m03
type: Worker
host: Running
kubelet: Stopped
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-935345 status" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-935345 -n multinode-935345
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-935345 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-935345 logs -n 25: (1.064397413s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| cp | multinode-935345 cp multinode-935345:/home/docker/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m03:/home/docker/cp-test_multinode-935345_multinode-935345-m03.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-935345 ssh -n multinode-935345-m03 sudo cat | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | /home/docker/cp-test_multinode-935345_multinode-935345-m03.txt | | | | | |
| cp | multinode-935345 cp testdata/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m02:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-935345 cp multinode-935345-m02:/home/docker/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | /tmp/TestMultiNodeserialCopyFile3564445534/001/cp-test_multinode-935345-m02.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-935345 cp multinode-935345-m02:/home/docker/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345:/home/docker/cp-test_multinode-935345-m02_multinode-935345.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-935345 ssh -n multinode-935345 sudo cat | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | /home/docker/cp-test_multinode-935345-m02_multinode-935345.txt | | | | | |
| cp | multinode-935345 cp multinode-935345-m02:/home/docker/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m03:/home/docker/cp-test_multinode-935345-m02_multinode-935345-m03.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-935345 ssh -n multinode-935345-m03 sudo cat | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | /home/docker/cp-test_multinode-935345-m02_multinode-935345-m03.txt | | | | | |
| cp | multinode-935345 cp testdata/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-935345 cp multinode-935345-m03:/home/docker/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | /tmp/TestMultiNodeserialCopyFile3564445534/001/cp-test_multinode-935345-m03.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-935345 cp multinode-935345-m03:/home/docker/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345:/home/docker/cp-test_multinode-935345-m03_multinode-935345.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-935345 ssh -n multinode-935345 sudo cat | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | /home/docker/cp-test_multinode-935345-m03_multinode-935345.txt | | | | | |
| cp | multinode-935345 cp multinode-935345-m03:/home/docker/cp-test.txt | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m02:/home/docker/cp-test_multinode-935345-m03_multinode-935345-m02.txt | | | | | |
| ssh | multinode-935345 ssh -n | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | multinode-935345-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-935345 ssh -n multinode-935345-m02 sudo cat | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| | /home/docker/cp-test_multinode-935345-m03_multinode-935345-m02.txt | | | | | |
| node | multinode-935345 node stop m03 | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | 24 May 23 21:07 UTC |
| node | multinode-935345 node start | multinode-935345 | jenkins | v1.30.1 | 24 May 23 21:07 UTC | |
| | m03 --alsologtostderr | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/05/24 21:04:33
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.20.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0524 21:04:33.452484 26754 out.go:296] Setting OutFile to fd 1 ...
I0524 21:04:33.452636 26754 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 21:04:33.452645 26754 out.go:309] Setting ErrFile to fd 2...
I0524 21:04:33.452652 26754 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 21:04:33.452765 26754 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16572-7844/.minikube/bin
I0524 21:04:33.453306 26754 out.go:303] Setting JSON to false
I0524 21:04:33.454152 26754 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2829,"bootTime":1684959445,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0524 21:04:33.454205 26754 start.go:135] virtualization: kvm guest
I0524 21:04:33.456839 26754 out.go:177] * [multinode-935345] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
I0524 21:04:33.458367 26754 out.go:177] - MINIKUBE_LOCATION=16572
I0524 21:04:33.458324 26754 notify.go:220] Checking for updates...
I0524 21:04:33.460030 26754 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0524 21:04:33.461748 26754 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/16572-7844/kubeconfig
I0524 21:04:33.463360 26754 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/16572-7844/.minikube
I0524 21:04:33.464910 26754 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0524 21:04:33.466493 26754 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0524 21:04:33.468189 26754 driver.go:375] Setting default libvirt URI to qemu:///system
I0524 21:04:33.501986 26754 out.go:177] * Using the kvm2 driver based on user configuration
I0524 21:04:33.503481 26754 start.go:295] selected driver: kvm2
I0524 21:04:33.503493 26754 start.go:870] validating driver "kvm2" against <nil>
I0524 21:04:33.503505 26754 start.go:881] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0524 21:04:33.504106 26754 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0524 21:04:33.504181 26754 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16572-7844/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0524 21:04:33.517575 26754 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.30.1
I0524 21:04:33.517622 26754 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0524 21:04:33.517804 26754 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0524 21:04:33.517831 26754 cni.go:84] Creating CNI manager for ""
I0524 21:04:33.517837 26754 cni.go:136] 0 nodes found, recommending kindnet
I0524 21:04:33.517845 26754 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
I0524 21:04:33.517860 26754 start_flags.go:319] config:
{Name:multinode-935345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684885407-16572@sha256:1678a360739dac48ad7fdd0fcdfd8f9af43ced0b54ec5cd320e5a35a4c50c733 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-935345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0524 21:04:33.517974 26754 iso.go:125] acquiring lock: {Name:mk7e3ecf4058bd3be0314fba10a03ee4519bccda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0524 21:04:33.520063 26754 out.go:177] * Starting control plane node multinode-935345 in cluster multinode-935345
I0524 21:04:33.521746 26754 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:04:33.521774 26754 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
I0524 21:04:33.521788 26754 cache.go:57] Caching tarball of preloaded images
I0524 21:04:33.521855 26754 preload.go:174] Found /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0524 21:04:33.521864 26754 cache.go:60] Finished verifying existence of preloaded tar for v1.27.2 on docker
I0524 21:04:33.522123 26754 profile.go:148] Saving config to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json ...
I0524 21:04:33.522140 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json: {Name:mka4f3c6c02776475c87c0a02b436b950ab72154 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:04:33.522246 26754 cache.go:195] Successfully downloaded all kic artifacts
I0524 21:04:33.522263 26754 start.go:364] acquiring machines lock for multinode-935345: {Name:mk4a40b66c29ad20ca421f9aaaf38de8f4a54848 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0524 21:04:33.522286 26754 start.go:368] acquired machines lock for "multinode-935345" in 13.826µs
I0524 21:04:33.522301 26754 start.go:93] Provisioning new machine with config: &{Name:multinode-935345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684885407-16572@sha256:1678a360739dac48ad7fdd0fcdfd8f9af43ced0b54ec5cd320e5a35a4c50c733 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.27.2 ClusterName:multinode-935345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0524 21:04:33.522358 26754 start.go:125] createHost starting for "" (driver="kvm2")
I0524 21:04:33.524373 26754 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0524 21:04:33.524484 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:04:33.524526 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:04:33.537539 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
I0524 21:04:33.537874 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:04:33.538426 26754 main.go:141] libmachine: Using API Version 1
I0524 21:04:33.538444 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:04:33.538781 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:04:33.538949 26754 main.go:141] libmachine: (multinode-935345) Calling .GetMachineName
I0524 21:04:33.539083 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:33.539192 26754 start.go:159] libmachine.API.Create for "multinode-935345" (driver="kvm2")
I0524 21:04:33.539226 26754 client.go:168] LocalClient.Create starting
I0524 21:04:33.539259 26754 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem
I0524 21:04:33.539286 26754 main.go:141] libmachine: Decoding PEM data...
I0524 21:04:33.539303 26754 main.go:141] libmachine: Parsing certificate...
I0524 21:04:33.539347 26754 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem
I0524 21:04:33.539364 26754 main.go:141] libmachine: Decoding PEM data...
I0524 21:04:33.539376 26754 main.go:141] libmachine: Parsing certificate...
I0524 21:04:33.539393 26754 main.go:141] libmachine: Running pre-create checks...
I0524 21:04:33.539403 26754 main.go:141] libmachine: (multinode-935345) Calling .PreCreateCheck
I0524 21:04:33.539671 26754 main.go:141] libmachine: (multinode-935345) Calling .GetConfigRaw
I0524 21:04:33.540027 26754 main.go:141] libmachine: Creating machine...
I0524 21:04:33.540040 26754 main.go:141] libmachine: (multinode-935345) Calling .Create
I0524 21:04:33.540180 26754 main.go:141] libmachine: (multinode-935345) Creating KVM machine...
I0524 21:04:33.541310 26754 main.go:141] libmachine: (multinode-935345) DBG | found existing default KVM network
I0524 21:04:33.542060 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:33.541911 26777 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298a0}
I0524 21:04:33.546963 26754 main.go:141] libmachine: (multinode-935345) DBG | trying to create private KVM network mk-multinode-935345 192.168.39.0/24...
I0524 21:04:33.611952 26754 main.go:141] libmachine: (multinode-935345) DBG | private KVM network mk-multinode-935345 192.168.39.0/24 created
I0524 21:04:33.611974 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:33.611914 26777 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/16572-7844/.minikube
I0524 21:04:33.611984 26754 main.go:141] libmachine: (multinode-935345) Setting up store path in /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345 ...
I0524 21:04:33.612014 26754 main.go:141] libmachine: (multinode-935345) Building disk image from file:///home/jenkins/minikube-integration/16572-7844/.minikube/cache/iso/amd64/minikube-v1.30.1-1684885329-16572-amd64.iso
I0524 21:04:33.612070 26754 main.go:141] libmachine: (multinode-935345) Downloading /home/jenkins/minikube-integration/16572-7844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16572-7844/.minikube/cache/iso/amd64/minikube-v1.30.1-1684885329-16572-amd64.iso...
I0524 21:04:33.803029 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:33.802882 26777 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa...
I0524 21:04:33.944272 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:33.944155 26777 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/multinode-935345.rawdisk...
I0524 21:04:33.944308 26754 main.go:141] libmachine: (multinode-935345) DBG | Writing magic tar header
I0524 21:04:33.944417 26754 main.go:141] libmachine: (multinode-935345) DBG | Writing SSH key tar header
I0524 21:04:33.944466 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:33.944268 26777 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345 ...
I0524 21:04:33.944487 26754 main.go:141] libmachine: (multinode-935345) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345 (perms=drwx------)
I0524 21:04:33.944514 26754 main.go:141] libmachine: (multinode-935345) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844/.minikube/machines (perms=drwxrwxr-x)
I0524 21:04:33.944532 26754 main.go:141] libmachine: (multinode-935345) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844/.minikube (perms=drwxr-xr-x)
I0524 21:04:33.944543 26754 main.go:141] libmachine: (multinode-935345) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844 (perms=drwxrwxr-x)
I0524 21:04:33.944558 26754 main.go:141] libmachine: (multinode-935345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345
I0524 21:04:33.944569 26754 main.go:141] libmachine: (multinode-935345) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0524 21:04:33.944586 26754 main.go:141] libmachine: (multinode-935345) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0524 21:04:33.944603 26754 main.go:141] libmachine: (multinode-935345) Creating domain...
I0524 21:04:33.944619 26754 main.go:141] libmachine: (multinode-935345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844/.minikube/machines
I0524 21:04:33.944631 26754 main.go:141] libmachine: (multinode-935345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844/.minikube
I0524 21:04:33.944647 26754 main.go:141] libmachine: (multinode-935345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844
I0524 21:04:33.944660 26754 main.go:141] libmachine: (multinode-935345) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0524 21:04:33.944670 26754 main.go:141] libmachine: (multinode-935345) DBG | Checking permissions on dir: /home/jenkins
I0524 21:04:33.944675 26754 main.go:141] libmachine: (multinode-935345) DBG | Checking permissions on dir: /home
I0524 21:04:33.944702 26754 main.go:141] libmachine: (multinode-935345) DBG | Skipping /home - not owner
I0524 21:04:33.945524 26754 main.go:141] libmachine: (multinode-935345) define libvirt domain using xml:
I0524 21:04:33.945548 26754 main.go:141] libmachine: (multinode-935345) <domain type='kvm'>
I0524 21:04:33.945560 26754 main.go:141] libmachine: (multinode-935345) <name>multinode-935345</name>
I0524 21:04:33.945579 26754 main.go:141] libmachine: (multinode-935345) <memory unit='MiB'>2200</memory>
I0524 21:04:33.945592 26754 main.go:141] libmachine: (multinode-935345) <vcpu>2</vcpu>
I0524 21:04:33.945600 26754 main.go:141] libmachine: (multinode-935345) <features>
I0524 21:04:33.945606 26754 main.go:141] libmachine: (multinode-935345) <acpi/>
I0524 21:04:33.945613 26754 main.go:141] libmachine: (multinode-935345) <apic/>
I0524 21:04:33.945633 26754 main.go:141] libmachine: (multinode-935345) <pae/>
I0524 21:04:33.945652 26754 main.go:141] libmachine: (multinode-935345)
I0524 21:04:33.945669 26754 main.go:141] libmachine: (multinode-935345) </features>
I0524 21:04:33.945682 26754 main.go:141] libmachine: (multinode-935345) <cpu mode='host-passthrough'>
I0524 21:04:33.945702 26754 main.go:141] libmachine: (multinode-935345)
I0524 21:04:33.945714 26754 main.go:141] libmachine: (multinode-935345) </cpu>
I0524 21:04:33.945727 26754 main.go:141] libmachine: (multinode-935345) <os>
I0524 21:04:33.945749 26754 main.go:141] libmachine: (multinode-935345) <type>hvm</type>
I0524 21:04:33.945764 26754 main.go:141] libmachine: (multinode-935345) <boot dev='cdrom'/>
I0524 21:04:33.945773 26754 main.go:141] libmachine: (multinode-935345) <boot dev='hd'/>
I0524 21:04:33.945781 26754 main.go:141] libmachine: (multinode-935345) <bootmenu enable='no'/>
I0524 21:04:33.945789 26754 main.go:141] libmachine: (multinode-935345) </os>
I0524 21:04:33.945796 26754 main.go:141] libmachine: (multinode-935345) <devices>
I0524 21:04:33.945803 26754 main.go:141] libmachine: (multinode-935345) <disk type='file' device='cdrom'>
I0524 21:04:33.945816 26754 main.go:141] libmachine: (multinode-935345) <source file='/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/boot2docker.iso'/>
I0524 21:04:33.945826 26754 main.go:141] libmachine: (multinode-935345) <target dev='hdc' bus='scsi'/>
I0524 21:04:33.945847 26754 main.go:141] libmachine: (multinode-935345) <readonly/>
I0524 21:04:33.945855 26754 main.go:141] libmachine: (multinode-935345) </disk>
I0524 21:04:33.945863 26754 main.go:141] libmachine: (multinode-935345) <disk type='file' device='disk'>
I0524 21:04:33.945875 26754 main.go:141] libmachine: (multinode-935345) <driver name='qemu' type='raw' cache='default' io='threads' />
I0524 21:04:33.945885 26754 main.go:141] libmachine: (multinode-935345) <source file='/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/multinode-935345.rawdisk'/>
I0524 21:04:33.945894 26754 main.go:141] libmachine: (multinode-935345) <target dev='hda' bus='virtio'/>
I0524 21:04:33.945901 26754 main.go:141] libmachine: (multinode-935345) </disk>
I0524 21:04:33.945908 26754 main.go:141] libmachine: (multinode-935345) <interface type='network'>
I0524 21:04:33.945919 26754 main.go:141] libmachine: (multinode-935345) <source network='mk-multinode-935345'/>
I0524 21:04:33.945927 26754 main.go:141] libmachine: (multinode-935345) <model type='virtio'/>
I0524 21:04:33.945938 26754 main.go:141] libmachine: (multinode-935345) </interface>
I0524 21:04:33.945947 26754 main.go:141] libmachine: (multinode-935345) <interface type='network'>
I0524 21:04:33.945955 26754 main.go:141] libmachine: (multinode-935345) <source network='default'/>
I0524 21:04:33.945966 26754 main.go:141] libmachine: (multinode-935345) <model type='virtio'/>
I0524 21:04:33.945974 26754 main.go:141] libmachine: (multinode-935345) </interface>
I0524 21:04:33.945980 26754 main.go:141] libmachine: (multinode-935345) <serial type='pty'>
I0524 21:04:33.945997 26754 main.go:141] libmachine: (multinode-935345) <target port='0'/>
I0524 21:04:33.946018 26754 main.go:141] libmachine: (multinode-935345) </serial>
I0524 21:04:33.946044 26754 main.go:141] libmachine: (multinode-935345) <console type='pty'>
I0524 21:04:33.946063 26754 main.go:141] libmachine: (multinode-935345) <target type='serial' port='0'/>
I0524 21:04:33.946084 26754 main.go:141] libmachine: (multinode-935345) </console>
I0524 21:04:33.946103 26754 main.go:141] libmachine: (multinode-935345) <rng model='virtio'>
I0524 21:04:33.946120 26754 main.go:141] libmachine: (multinode-935345) <backend model='random'>/dev/random</backend>
I0524 21:04:33.946133 26754 main.go:141] libmachine: (multinode-935345) </rng>
I0524 21:04:33.946143 26754 main.go:141] libmachine: (multinode-935345)
I0524 21:04:33.946152 26754 main.go:141] libmachine: (multinode-935345)
I0524 21:04:33.946162 26754 main.go:141] libmachine: (multinode-935345) </devices>
I0524 21:04:33.946178 26754 main.go:141] libmachine: (multinode-935345) </domain>
I0524 21:04:33.946190 26754 main.go:141] libmachine: (multinode-935345)
I0524 21:04:33.950753 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:22:a2:55 in network default
I0524 21:04:33.951259 26754 main.go:141] libmachine: (multinode-935345) Ensuring networks are active...
I0524 21:04:33.951280 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:33.951863 26754 main.go:141] libmachine: (multinode-935345) Ensuring network default is active
I0524 21:04:33.952125 26754 main.go:141] libmachine: (multinode-935345) Ensuring network mk-multinode-935345 is active
I0524 21:04:33.952558 26754 main.go:141] libmachine: (multinode-935345) Getting domain xml...
I0524 21:04:33.953197 26754 main.go:141] libmachine: (multinode-935345) Creating domain...
I0524 21:04:35.139169 26754 main.go:141] libmachine: (multinode-935345) Waiting to get IP...
I0524 21:04:35.139896 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:35.140319 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:35.140412 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:35.140317 26777 retry.go:31] will retry after 305.730314ms: waiting for machine to come up
I0524 21:04:35.447984 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:35.448421 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:35.448451 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:35.448357 26777 retry.go:31] will retry after 270.005352ms: waiting for machine to come up
I0524 21:04:35.719935 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:35.720323 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:35.720348 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:35.720272 26777 retry.go:31] will retry after 331.708422ms: waiting for machine to come up
I0524 21:04:36.053822 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:36.054167 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:36.054194 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:36.054125 26777 retry.go:31] will retry after 609.888776ms: waiting for machine to come up
I0524 21:04:36.665947 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:36.666328 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:36.666353 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:36.666284 26777 retry.go:31] will retry after 460.084523ms: waiting for machine to come up
I0524 21:04:37.128952 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:37.129307 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:37.129339 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:37.129271 26777 retry.go:31] will retry after 785.473895ms: waiting for machine to come up
I0524 21:04:37.916710 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:37.917156 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:37.917208 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:37.917131 26777 retry.go:31] will retry after 882.441511ms: waiting for machine to come up
I0524 21:04:38.801553 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:38.801914 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:38.801946 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:38.801877 26777 retry.go:31] will retry after 909.782519ms: waiting for machine to come up
I0524 21:04:39.712825 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:39.713174 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:39.713203 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:39.713116 26777 retry.go:31] will retry after 1.135782968s: waiting for machine to come up
I0524 21:04:40.850409 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:40.850808 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:40.850837 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:40.850760 26777 retry.go:31] will retry after 1.855895951s: waiting for machine to come up
I0524 21:04:42.708779 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:42.709200 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:42.709223 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:42.709153 26777 retry.go:31] will retry after 2.044697841s: waiting for machine to come up
I0524 21:04:44.756257 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:44.756711 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:44.756743 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:44.756680 26777 retry.go:31] will retry after 2.264675069s: waiting for machine to come up
I0524 21:04:47.024320 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:47.024655 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:47.024692 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:47.024624 26777 retry.go:31] will retry after 3.91976867s: waiting for machine to come up
I0524 21:04:50.947078 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:50.947420 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find current IP address of domain multinode-935345 in network mk-multinode-935345
I0524 21:04:50.947454 26754 main.go:141] libmachine: (multinode-935345) DBG | I0524 21:04:50.947381 26777 retry.go:31] will retry after 4.772323647s: waiting for machine to come up
I0524 21:04:55.721906 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:55.722311 26754 main.go:141] libmachine: (multinode-935345) Found IP for machine: 192.168.39.141
I0524 21:04:55.722345 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has current primary IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:55.722357 26754 main.go:141] libmachine: (multinode-935345) Reserving static IP address...
I0524 21:04:55.722634 26754 main.go:141] libmachine: (multinode-935345) DBG | unable to find host DHCP lease matching {name: "multinode-935345", mac: "52:54:00:5f:95:e5", ip: "192.168.39.141"} in network mk-multinode-935345
I0524 21:04:55.791577 26754 main.go:141] libmachine: (multinode-935345) DBG | Getting to WaitForSSH function...
I0524 21:04:55.791609 26754 main.go:141] libmachine: (multinode-935345) Reserved static IP address: 192.168.39.141
I0524 21:04:55.791653 26754 main.go:141] libmachine: (multinode-935345) Waiting for SSH to be available...
I0524 21:04:55.793989 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:55.794353 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:55.794380 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:55.794498 26754 main.go:141] libmachine: (multinode-935345) DBG | Using SSH client type: external
I0524 21:04:55.794525 26754 main.go:141] libmachine: (multinode-935345) DBG | Using SSH private key: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa (-rw-------)
I0524 21:04:55.794580 26754 main.go:141] libmachine: (multinode-935345) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa -p 22] /usr/bin/ssh <nil>}
I0524 21:04:55.794606 26754 main.go:141] libmachine: (multinode-935345) DBG | About to run SSH command:
I0524 21:04:55.794623 26754 main.go:141] libmachine: (multinode-935345) DBG | exit 0
I0524 21:04:55.882138 26754 main.go:141] libmachine: (multinode-935345) DBG | SSH cmd err, output: <nil>:
I0524 21:04:55.882372 26754 main.go:141] libmachine: (multinode-935345) KVM machine creation complete!
I0524 21:04:55.882691 26754 main.go:141] libmachine: (multinode-935345) Calling .GetConfigRaw
I0524 21:04:55.883307 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:55.883492 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:55.883641 26754 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0524 21:04:55.883658 26754 main.go:141] libmachine: (multinode-935345) Calling .GetState
I0524 21:04:55.884794 26754 main.go:141] libmachine: Detecting operating system of created instance...
I0524 21:04:55.884810 26754 main.go:141] libmachine: Waiting for SSH to be available...
I0524 21:04:55.884822 26754 main.go:141] libmachine: Getting to WaitForSSH function...
I0524 21:04:55.884829 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:55.887087 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:55.887442 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:55.887480 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:55.887617 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:55.887771 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:55.887903 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:55.888021 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:55.888201 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:55.888662 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:55.888676 26754 main.go:141] libmachine: About to run SSH command:
exit 0
I0524 21:04:56.001489 26754 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0524 21:04:56.001526 26754 main.go:141] libmachine: Detecting the provisioner...
I0524 21:04:56.001538 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.003951 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.004281 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.004308 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.004491 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:56.004676 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.004825 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.004971 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:56.005134 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:56.005558 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:56.005573 26754 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0524 21:04:56.119374 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g05a3382-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0524 21:04:56.119448 26754 main.go:141] libmachine: found compatible host: buildroot
I0524 21:04:56.119458 26754 main.go:141] libmachine: Provisioning with buildroot...
I0524 21:04:56.119469 26754 main.go:141] libmachine: (multinode-935345) Calling .GetMachineName
I0524 21:04:56.119725 26754 buildroot.go:166] provisioning hostname "multinode-935345"
I0524 21:04:56.119754 26754 main.go:141] libmachine: (multinode-935345) Calling .GetMachineName
I0524 21:04:56.119977 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.122581 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.122884 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.122916 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.123045 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:56.123215 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.123366 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.123490 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:56.123622 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:56.124027 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:56.124041 26754 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-935345 && echo "multinode-935345" | sudo tee /etc/hostname
I0524 21:04:56.246675 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-935345
I0524 21:04:56.246707 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.249494 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.249826 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.249856 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.250018 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:56.250188 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.250311 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.250419 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:56.250574 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:56.250986 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:56.251006 26754 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-935345' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-935345/g' /etc/hosts;
else
echo '127.0.1.1 multinode-935345' | sudo tee -a /etc/hosts;
fi
fi
I0524 21:04:56.369896 26754 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0524 21:04:56.369917 26754 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16572-7844/.minikube CaCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16572-7844/.minikube}
I0524 21:04:56.369934 26754 buildroot.go:174] setting up certificates
I0524 21:04:56.369949 26754 provision.go:83] configureAuth start
I0524 21:04:56.369956 26754 main.go:141] libmachine: (multinode-935345) Calling .GetMachineName
I0524 21:04:56.370200 26754 main.go:141] libmachine: (multinode-935345) Calling .GetIP
I0524 21:04:56.372638 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.372961 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.372991 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.373071 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.375006 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.375308 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.375334 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.375496 26754 provision.go:138] copyHostCerts
I0524 21:04:56.375523 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem
I0524 21:04:56.375552 26754 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem, removing ...
I0524 21:04:56.375560 26754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem
I0524 21:04:56.375607 26754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem (1123 bytes)
I0524 21:04:56.375688 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem
I0524 21:04:56.375707 26754 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem, removing ...
I0524 21:04:56.375714 26754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem
I0524 21:04:56.375734 26754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem (1679 bytes)
I0524 21:04:56.375786 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem
I0524 21:04:56.375808 26754 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem, removing ...
I0524 21:04:56.375814 26754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem
I0524 21:04:56.375832 26754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem (1078 bytes)
I0524 21:04:56.375888 26754 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem org=jenkins.multinode-935345 san=[192.168.39.141 192.168.39.141 localhost 127.0.0.1 minikube multinode-935345]
I0524 21:04:56.512305 26754 provision.go:172] copyRemoteCerts
I0524 21:04:56.512366 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0524 21:04:56.512387 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.514864 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.515220 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.515253 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.515421 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:56.515598 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.515751 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:56.515877 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa Username:docker}
I0524 21:04:56.604380 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0524 21:04:56.604459 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0524 21:04:56.625828 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem -> /etc/docker/server.pem
I0524 21:04:56.625923 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0524 21:04:56.648341 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0524 21:04:56.648404 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0524 21:04:56.669760 26754 provision.go:86] duration metric: configureAuth took 299.797349ms
I0524 21:04:56.669790 26754 buildroot.go:189] setting minikube options for container-runtime
I0524 21:04:56.670003 26754 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:04:56.670042 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:56.670343 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.672974 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.673339 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.673368 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.673468 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:56.673661 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.673835 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.673971 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:56.674128 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:56.674518 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:56.674535 26754 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0524 21:04:56.787951 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0524 21:04:56.787976 26754 buildroot.go:70] root file system type: tmpfs
I0524 21:04:56.788102 26754 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0524 21:04:56.788130 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.790647 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.790991 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.791014 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.791153 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:56.791325 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.791492 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.791614 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:56.791791 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:56.792247 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:56.792327 26754 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0524 21:04:56.914995 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0524 21:04:56.915032 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:56.917545 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.917845 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:56.917871 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:56.918017 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:56.918188 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.918343 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:56.918525 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:56.918768 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:56.919223 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:56.919248 26754 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0524 21:04:57.695774 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0524 21:04:57.695803 26754 main.go:141] libmachine: Checking connection to Docker...
I0524 21:04:57.695815 26754 main.go:141] libmachine: (multinode-935345) Calling .GetURL
I0524 21:04:57.696925 26754 main.go:141] libmachine: (multinode-935345) DBG | Using libvirt version 6000000
I0524 21:04:57.698971 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.699312 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:57.699334 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.699471 26754 main.go:141] libmachine: Docker is up and running!
I0524 21:04:57.699483 26754 main.go:141] libmachine: Reticulating splines...
I0524 21:04:57.699489 26754 client.go:171] LocalClient.Create took 24.160256231s
I0524 21:04:57.699508 26754 start.go:167] duration metric: libmachine.API.Create for "multinode-935345" took 24.160315895s
I0524 21:04:57.699520 26754 start.go:300] post-start starting for "multinode-935345" (driver="kvm2")
I0524 21:04:57.699534 26754 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0524 21:04:57.699562 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:57.699791 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0524 21:04:57.699816 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:57.701643 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.701930 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:57.701964 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.702131 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:57.702296 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:57.702459 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:57.702573 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa Username:docker}
I0524 21:04:57.787223 26754 ssh_runner.go:195] Run: cat /etc/os-release
I0524 21:04:57.791169 26754 command_runner.go:130] > NAME=Buildroot
I0524 21:04:57.791191 26754 command_runner.go:130] > VERSION=2021.02.12-1-g05a3382-dirty
I0524 21:04:57.791196 26754 command_runner.go:130] > ID=buildroot
I0524 21:04:57.791202 26754 command_runner.go:130] > VERSION_ID=2021.02.12
I0524 21:04:57.791209 26754 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0524 21:04:57.791440 26754 info.go:137] Remote host: Buildroot 2021.02.12
I0524 21:04:57.791472 26754 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/addons for local assets ...
I0524 21:04:57.791542 26754 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/files for local assets ...
I0524 21:04:57.791629 26754 filesync.go:149] local asset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> 150552.pem in /etc/ssl/certs
I0524 21:04:57.791638 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> /etc/ssl/certs/150552.pem
I0524 21:04:57.791712 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0524 21:04:57.799503 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem --> /etc/ssl/certs/150552.pem (1708 bytes)
I0524 21:04:57.821938 26754 start.go:303] post-start completed in 122.400033ms
I0524 21:04:57.821985 26754 main.go:141] libmachine: (multinode-935345) Calling .GetConfigRaw
I0524 21:04:57.822512 26754 main.go:141] libmachine: (multinode-935345) Calling .GetIP
I0524 21:04:57.824875 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.825225 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:57.825259 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.825522 26754 profile.go:148] Saving config to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json ...
I0524 21:04:57.825698 26754 start.go:128] duration metric: createHost completed in 24.303332762s
I0524 21:04:57.825724 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:57.827787 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.828134 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:57.828165 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.828288 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:57.828486 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:57.828746 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:57.828917 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:57.829107 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:04:57.829480 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.141 22 <nil> <nil>}
I0524 21:04:57.829490 26754 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0524 21:04:57.943078 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684962297.914123812
I0524 21:04:57.943100 26754 fix.go:207] guest clock: 1684962297.914123812
I0524 21:04:57.943108 26754 fix.go:220] Guest: 2023-05-24 21:04:57.914123812 +0000 UTC Remote: 2023-05-24 21:04:57.825710598 +0000 UTC m=+24.403180545 (delta=88.413214ms)
I0524 21:04:57.943132 26754 fix.go:191] guest clock delta is within tolerance: 88.413214ms
I0524 21:04:57.943137 26754 start.go:83] releasing machines lock for "multinode-935345", held for 24.420842767s
I0524 21:04:57.943171 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:57.943440 26754 main.go:141] libmachine: (multinode-935345) Calling .GetIP
I0524 21:04:57.946341 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.946667 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:57.946711 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.946855 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:57.947399 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:57.947587 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:04:57.947682 26754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0524 21:04:57.947730 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:57.947785 26754 ssh_runner.go:195] Run: cat /version.json
I0524 21:04:57.947807 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:04:57.950333 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.950413 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.950736 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:57.950765 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.950822 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:04:57.950845 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:04:57.950902 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:57.951113 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:04:57.951164 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:57.951313 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:04:57.951346 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:57.951408 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:04:57.951484 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa Username:docker}
I0524 21:04:57.951530 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa Username:docker}
I0524 21:04:58.031037 26754 command_runner.go:130] > {"iso_version": "v1.30.1-1684885329-16572", "kicbase_version": "v0.0.39-1684536746-16501", "minikube_version": "v1.30.1", "commit": "c14698eb4f629999281449d12fc0eb253a634d9a"}
I0524 21:04:58.031165 26754 ssh_runner.go:195] Run: systemctl --version
I0524 21:04:58.063769 26754 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0524 21:04:58.063831 26754 command_runner.go:130] > systemd 247 (247)
I0524 21:04:58.063852 26754 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0524 21:04:58.063917 26754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0524 21:04:58.069135 26754 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0524 21:04:58.069261 26754 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0524 21:04:58.069328 26754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0524 21:04:58.085153 26754 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0524 21:04:58.085183 26754 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0524 21:04:58.085193 26754 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:04:58.085263 26754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0524 21:04:58.104962 26754 docker.go:633] Got preloaded images:
I0524 21:04:58.104976 26754 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
I0524 21:04:58.105011 26754 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0524 21:04:58.113793 26754 command_runner.go:139] > {"Repositories":{}}
I0524 21:04:58.113900 26754 ssh_runner.go:195] Run: which lz4
I0524 21:04:58.117422 26754 command_runner.go:130] > /usr/bin/lz4
I0524 21:04:58.117448 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0524 21:04:58.117515 26754 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0524 21:04:58.121039 26754 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0524 21:04:58.121239 26754 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0524 21:04:58.121265 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412256110 bytes)
I0524 21:04:59.712618 26754 docker.go:597] Took 1.595118 seconds to copy over tarball
I0524 21:04:59.712698 26754 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0524 21:05:02.320961 26754 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.608241318s)
I0524 21:05:02.320985 26754 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0524 21:05:02.358024 26754 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0524 21:05:02.367896 26754 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.7-0":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.27.2":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.27.2":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.27.2":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.27.2":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
I0524 21:05:02.368013 26754 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
I0524 21:05:02.383650 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:05:02.481465 26754 ssh_runner.go:195] Run: sudo systemctl restart docker
I0524 21:05:05.963539 26754 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.482040744s)
I0524 21:05:05.963573 26754 start.go:481] detecting cgroup driver to use...
I0524 21:05:05.963689 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:05:05.981183 26754 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0524 21:05:05.981736 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0524 21:05:05.990776 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0524 21:05:05.999638 26754 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0524 21:05:05.999699 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0524 21:05:06.008714 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:05:06.017707 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0524 21:05:06.026218 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:05:06.034759 26754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0524 21:05:06.043444 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0524 21:05:06.051863 26754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0524 21:05:06.059356 26754 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0524 21:05:06.059409 26754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0524 21:05:06.066950 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:05:06.167992 26754 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0524 21:05:06.184744 26754 start.go:481] detecting cgroup driver to use...
I0524 21:05:06.184810 26754 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0524 21:05:06.196973 26754 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0524 21:05:06.197828 26754 command_runner.go:130] > [Unit]
I0524 21:05:06.197841 26754 command_runner.go:130] > Description=Docker Application Container Engine
I0524 21:05:06.197846 26754 command_runner.go:130] > Documentation=https://docs.docker.com
I0524 21:05:06.197851 26754 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0524 21:05:06.197857 26754 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0524 21:05:06.197863 26754 command_runner.go:130] > StartLimitBurst=3
I0524 21:05:06.197872 26754 command_runner.go:130] > StartLimitIntervalSec=60
I0524 21:05:06.197875 26754 command_runner.go:130] > [Service]
I0524 21:05:06.197879 26754 command_runner.go:130] > Type=notify
I0524 21:05:06.197887 26754 command_runner.go:130] > Restart=on-failure
I0524 21:05:06.197894 26754 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0524 21:05:06.197905 26754 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0524 21:05:06.197914 26754 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0524 21:05:06.197922 26754 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0524 21:05:06.197931 26754 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0524 21:05:06.197941 26754 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0524 21:05:06.197952 26754 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0524 21:05:06.197970 26754 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0524 21:05:06.197988 26754 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0524 21:05:06.197994 26754 command_runner.go:130] > ExecStart=
I0524 21:05:06.198012 26754 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0524 21:05:06.198022 26754 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0524 21:05:06.198036 26754 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0524 21:05:06.198045 26754 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0524 21:05:06.198054 26754 command_runner.go:130] > LimitNOFILE=infinity
I0524 21:05:06.198063 26754 command_runner.go:130] > LimitNPROC=infinity
I0524 21:05:06.198074 26754 command_runner.go:130] > LimitCORE=infinity
I0524 21:05:06.198085 26754 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0524 21:05:06.198096 26754 command_runner.go:130] > # Only systemd 226 and above support this version.
I0524 21:05:06.198100 26754 command_runner.go:130] > TasksMax=infinity
I0524 21:05:06.198105 26754 command_runner.go:130] > TimeoutStartSec=0
I0524 21:05:06.198111 26754 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0524 21:05:06.198117 26754 command_runner.go:130] > Delegate=yes
I0524 21:05:06.198123 26754 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0524 21:05:06.198131 26754 command_runner.go:130] > KillMode=process
I0524 21:05:06.198139 26754 command_runner.go:130] > [Install]
I0524 21:05:06.198157 26754 command_runner.go:130] > WantedBy=multi-user.target
I0524 21:05:06.198412 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:05:06.210329 26754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0524 21:05:06.226909 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:05:06.238702 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:05:06.250045 26754 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0524 21:05:06.278468 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:05:06.291354 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:05:06.305866 26754 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0524 21:05:06.306568 26754 ssh_runner.go:195] Run: which cri-dockerd
I0524 21:05:06.309837 26754 command_runner.go:130] > /usr/bin/cri-dockerd
I0524 21:05:06.309910 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0524 21:05:06.318423 26754 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0524 21:05:06.333502 26754 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0524 21:05:06.438796 26754 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0524 21:05:06.547396 26754 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
I0524 21:05:06.547430 26754 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0524 21:05:06.563118 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:05:06.661934 26754 ssh_runner.go:195] Run: sudo systemctl restart docker
I0524 21:05:08.068422 26754 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.406450466s)
I0524 21:05:08.068489 26754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:05:08.166582 26754 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0524 21:05:08.273669 26754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:05:08.381699 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:05:08.480754 26754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0524 21:05:08.585943 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:05:08.688573 26754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0524 21:05:08.766693 26754 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0524 21:05:08.766773 26754 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0524 21:05:08.772169 26754 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0524 21:05:08.772187 26754 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0524 21:05:08.772198 26754 command_runner.go:130] > Device: 16h/22d Inode: 948 Links: 1
I0524 21:05:08.772204 26754 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0524 21:05:08.772228 26754 command_runner.go:130] > Access: 2023-05-24 21:05:08.686246646 +0000
I0524 21:05:08.772238 26754 command_runner.go:130] > Modify: 2023-05-24 21:05:08.686246646 +0000
I0524 21:05:08.772249 26754 command_runner.go:130] > Change: 2023-05-24 21:05:08.689249334 +0000
I0524 21:05:08.772255 26754 command_runner.go:130] > Birth: -
I0524 21:05:08.772548 26754 start.go:549] Will wait 60s for crictl version
I0524 21:05:08.772607 26754 ssh_runner.go:195] Run: which crictl
I0524 21:05:08.778156 26754 command_runner.go:130] > /usr/bin/crictl
I0524 21:05:08.778267 26754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0524 21:05:08.820567 26754 command_runner.go:130] > Version: 0.1.0
I0524 21:05:08.820594 26754 command_runner.go:130] > RuntimeName: docker
I0524 21:05:08.820605 26754 command_runner.go:130] > RuntimeVersion: 24.0.1
I0524 21:05:08.820612 26754 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0524 21:05:08.822169 26754 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.1
RuntimeApiVersion: v1alpha2
I0524 21:05:08.822234 26754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0524 21:05:08.852063 26754 command_runner.go:130] > 24.0.1
I0524 21:05:08.853201 26754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0524 21:05:08.878536 26754 command_runner.go:130] > 24.0.1
I0524 21:05:08.968847 26754 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.1 ...
I0524 21:05:08.968897 26754 main.go:141] libmachine: (multinode-935345) Calling .GetIP
I0524 21:05:08.971779 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:05:08.972181 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:05:08.972213 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:05:08.972391 26754 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0524 21:05:08.977005 26754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0524 21:05:08.989439 26754 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:05:08.989492 26754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0524 21:05:09.008265 26754 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
I0524 21:05:09.008286 26754 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
I0524 21:05:09.008292 26754 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
I0524 21:05:09.008297 26754 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
I0524 21:05:09.008302 26754 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0524 21:05:09.008306 26754 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
I0524 21:05:09.008310 26754 command_runner.go:130] > registry.k8s.io/pause:3.9
I0524 21:05:09.008315 26754 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0524 21:05:09.009292 26754 docker.go:633] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0524 21:05:09.009310 26754 docker.go:563] Images already preloaded, skipping extraction
I0524 21:05:09.009371 26754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0524 21:05:09.028399 26754 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
I0524 21:05:09.028421 26754 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
I0524 21:05:09.028426 26754 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
I0524 21:05:09.028435 26754 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
I0524 21:05:09.028439 26754 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0524 21:05:09.028444 26754 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
I0524 21:05:09.028448 26754 command_runner.go:130] > registry.k8s.io/pause:3.9
I0524 21:05:09.028453 26754 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0524 21:05:09.029374 26754 docker.go:633] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0524 21:05:09.029393 26754 cache_images.go:84] Images are preloaded, skipping loading
I0524 21:05:09.029459 26754 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0524 21:05:09.055902 26754 command_runner.go:130] > cgroupfs
I0524 21:05:09.056855 26754 cni.go:84] Creating CNI manager for ""
I0524 21:05:09.056873 26754 cni.go:136] 1 nodes found, recommending kindnet
I0524 21:05:09.056888 26754 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0524 21:05:09.056904 26754 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-935345 NodeName:multinode-935345 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0524 21:05:09.057066 26754 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.141
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-935345"
kubeletExtraArgs:
node-ip: 192.168.39.141
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.27.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0524 21:05:09.057153 26754 kubeadm.go:971] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-935345 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
[Install]
config:
{KubernetesVersion:v1.27.2 ClusterName:multinode-935345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0524 21:05:09.057216 26754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
I0524 21:05:09.066756 26754 command_runner.go:130] > kubeadm
I0524 21:05:09.066779 26754 command_runner.go:130] > kubectl
I0524 21:05:09.066786 26754 command_runner.go:130] > kubelet
I0524 21:05:09.066802 26754 binaries.go:44] Found k8s binaries, skipping transfer
I0524 21:05:09.066849 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0524 21:05:09.075425 26754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
I0524 21:05:09.091469 26754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0524 21:05:09.107167 26754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
I0524 21:05:09.122437 26754 ssh_runner.go:195] Run: grep 192.168.39.141 control-plane.minikube.internal$ /etc/hosts
I0524 21:05:09.126118 26754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.141 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0524 21:05:09.138147 26754 certs.go:56] Setting up /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345 for IP: 192.168.39.141
I0524 21:05:09.138172 26754 certs.go:190] acquiring lock for shared ca certs: {Name:mkd255b1ad7adc894443e9a2618d4730aa631e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:09.138322 26754 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.key
I0524 21:05:09.138376 26754 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.key
I0524 21:05:09.138425 26754 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key
I0524 21:05:09.138438 26754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt with IP's: []
I0524 21:05:09.239626 26754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt ...
I0524 21:05:09.239655 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt: {Name:mkee8b8bcc29bf370a2acf62b614f8751d832906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:09.239845 26754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key ...
I0524 21:05:09.239865 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key: {Name:mk41c0960e38c2fe7e21d1ee76d41ce6109e63da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:09.239971 26754 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.key.aae9e419
I0524 21:05:09.239994 26754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.crt.aae9e419 with IP's: [192.168.39.141 10.96.0.1 127.0.0.1 10.0.0.1]
I0524 21:05:09.494750 26754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.crt.aae9e419 ...
I0524 21:05:09.494779 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.crt.aae9e419: {Name:mk49b00b54ba6cc3d149e5864806188fa63c73bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:09.494964 26754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.key.aae9e419 ...
I0524 21:05:09.494978 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.key.aae9e419: {Name:mk3ea08fa080b09dfd2b609a1ddcf93dd09e8ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:09.495071 26754 certs.go:337] copying /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.crt.aae9e419 -> /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.crt
I0524 21:05:09.495153 26754 certs.go:341] copying /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.key.aae9e419 -> /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.key
I0524 21:05:09.495205 26754 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.key
I0524 21:05:09.495219 26754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.crt with IP's: []
I0524 21:05:09.642166 26754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.crt ...
I0524 21:05:09.642192 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.crt: {Name:mka7908cb7acfa7f6d7e5af956f02ff4db10336f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:09.642363 26754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.key ...
I0524 21:05:09.642377 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.key: {Name:mka1c28caa8757799e475d646d50ca3767b297da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:09.642472 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0524 21:05:09.642493 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0524 21:05:09.642504 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0524 21:05:09.642517 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0524 21:05:09.642529 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0524 21:05:09.642541 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0524 21:05:09.642573 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0524 21:05:09.642591 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0524 21:05:09.642648 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055.pem (1338 bytes)
W0524 21:05:09.642682 26754 certs.go:433] ignoring /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055_empty.pem, impossibly tiny 0 bytes
I0524 21:05:09.642694 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem (1679 bytes)
I0524 21:05:09.642714 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem (1078 bytes)
I0524 21:05:09.642737 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem (1123 bytes)
I0524 21:05:09.642758 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem (1679 bytes)
I0524 21:05:09.642795 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem (1708 bytes)
I0524 21:05:09.642822 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> /usr/share/ca-certificates/150552.pem
I0524 21:05:09.642837 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0524 21:05:09.642851 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055.pem -> /usr/share/ca-certificates/15055.pem
I0524 21:05:09.643374 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0524 21:05:09.666933 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0524 21:05:09.689005 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0524 21:05:09.711356 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0524 21:05:09.732954 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0524 21:05:09.754423 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0524 21:05:09.775421 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0524 21:05:09.796821 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0524 21:05:09.819125 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem --> /usr/share/ca-certificates/150552.pem (1708 bytes)
I0524 21:05:09.841889 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0524 21:05:09.863747 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055.pem --> /usr/share/ca-certificates/15055.pem (1338 bytes)
I0524 21:05:09.885742 26754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0524 21:05:09.900954 26754 ssh_runner.go:195] Run: openssl version
I0524 21:05:09.905902 26754 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0524 21:05:09.905989 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150552.pem && ln -fs /usr/share/ca-certificates/150552.pem /etc/ssl/certs/150552.pem"
I0524 21:05:09.915104 26754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150552.pem
I0524 21:05:09.919404 26754 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 24 20:52 /usr/share/ca-certificates/150552.pem
I0524 21:05:09.919467 26754 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 20:52 /usr/share/ca-certificates/150552.pem
I0524 21:05:09.919523 26754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150552.pem
I0524 21:05:09.924552 26754 command_runner.go:130] > 3ec20f2e
I0524 21:05:09.924655 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150552.pem /etc/ssl/certs/3ec20f2e.0"
I0524 21:05:09.933992 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0524 21:05:09.943165 26754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0524 21:05:09.947805 26754 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 24 20:47 /usr/share/ca-certificates/minikubeCA.pem
I0524 21:05:09.947832 26754 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 20:47 /usr/share/ca-certificates/minikubeCA.pem
I0524 21:05:09.947868 26754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0524 21:05:09.952906 26754 command_runner.go:130] > b5213941
I0524 21:05:09.953203 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0524 21:05:09.962370 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15055.pem && ln -fs /usr/share/ca-certificates/15055.pem /etc/ssl/certs/15055.pem"
I0524 21:05:09.971700 26754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15055.pem
I0524 21:05:09.976295 26754 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 24 20:52 /usr/share/ca-certificates/15055.pem
I0524 21:05:09.976316 26754 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 20:52 /usr/share/ca-certificates/15055.pem
I0524 21:05:09.976359 26754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15055.pem
I0524 21:05:09.982019 26754 command_runner.go:130] > 51391683
I0524 21:05:09.982071 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15055.pem /etc/ssl/certs/51391683.0"
I0524 21:05:09.991697 26754 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0524 21:05:09.995628 26754 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0524 21:05:09.995818 26754 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0524 21:05:09.995864 26754 kubeadm.go:404] StartCluster: {Name:multinode-935345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684885407-16572@sha256:1678a360739dac48ad7fdd0fcdfd8f9af43ced0b54ec5cd320e5a35a4c50c733 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-935345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0524 21:05:09.995965 26754 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0524 21:05:10.017609 26754 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0524 21:05:10.027475 26754 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
I0524 21:05:10.027501 26754 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
I0524 21:05:10.027511 26754 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I0524 21:05:10.027617 26754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0524 21:05:10.036384 26754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0524 21:05:10.045033 26754 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0524 21:05:10.045058 26754 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0524 21:05:10.045069 26754 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0524 21:05:10.045084 26754 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0524 21:05:10.045116 26754 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0524 21:05:10.045147 26754 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0524 21:05:10.377607 26754 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0524 21:05:10.377642 26754 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0524 21:05:10.378456 26754 kubeadm.go:322] W0524 21:05:10.363750 1351 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0524 21:05:10.378479 26754 command_runner.go:130] ! W0524 21:05:10.363750 1351 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0524 21:05:13.106790 26754 kubeadm.go:322] W0524 21:05:13.092252 1351 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0524 21:05:13.106807 26754 command_runner.go:130] ! W0524 21:05:13.092252 1351 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
I0524 21:05:22.107484 26754 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
I0524 21:05:22.107513 26754 command_runner.go:130] > [init] Using Kubernetes version: v1.27.2
I0524 21:05:22.107560 26754 kubeadm.go:322] [preflight] Running pre-flight checks
I0524 21:05:22.107567 26754 command_runner.go:130] > [preflight] Running pre-flight checks
I0524 21:05:22.107660 26754 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0524 21:05:22.107669 26754 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
I0524 21:05:22.107778 26754 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0524 21:05:22.107790 26754 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I0524 21:05:22.107913 26754 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0524 21:05:22.107925 26754 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0524 21:05:22.107988 26754 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0524 21:05:22.109938 26754 out.go:204] - Generating certificates and keys ...
I0524 21:05:22.108036 26754 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0524 21:05:22.110023 26754 command_runner.go:130] > [certs] Using existing ca certificate authority
I0524 21:05:22.110035 26754 kubeadm.go:322] [certs] Using existing ca certificate authority
I0524 21:05:22.110121 26754 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0524 21:05:22.110137 26754 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0524 21:05:22.110209 26754 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
I0524 21:05:22.110221 26754 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0524 21:05:22.110288 26754 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
I0524 21:05:22.110297 26754 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0524 21:05:22.110364 26754 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
I0524 21:05:22.110373 26754 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0524 21:05:22.110412 26754 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
I0524 21:05:22.110421 26754 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0524 21:05:22.110491 26754 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
I0524 21:05:22.110500 26754 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0524 21:05:22.110692 26754 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-935345] and IPs [192.168.39.141 127.0.0.1 ::1]
I0524 21:05:22.110702 26754 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-935345] and IPs [192.168.39.141 127.0.0.1 ::1]
I0524 21:05:22.110775 26754 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
I0524 21:05:22.110784 26754 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0524 21:05:22.110943 26754 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-935345] and IPs [192.168.39.141 127.0.0.1 ::1]
I0524 21:05:22.110963 26754 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-935345] and IPs [192.168.39.141 127.0.0.1 ::1]
I0524 21:05:22.111042 26754 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
I0524 21:05:22.111054 26754 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0524 21:05:22.111134 26754 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
I0524 21:05:22.111143 26754 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0524 21:05:22.111197 26754 command_runner.go:130] > [certs] Generating "sa" key and public key
I0524 21:05:22.111206 26754 kubeadm.go:322] [certs] Generating "sa" key and public key
I0524 21:05:22.111281 26754 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0524 21:05:22.111292 26754 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0524 21:05:22.111352 26754 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0524 21:05:22.111363 26754 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0524 21:05:22.111420 26754 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0524 21:05:22.111433 26754 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0524 21:05:22.111501 26754 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0524 21:05:22.111509 26754 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0524 21:05:22.111554 26754 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0524 21:05:22.111566 26754 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0524 21:05:22.111703 26754 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0524 21:05:22.111714 26754 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0524 21:05:22.111787 26754 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0524 21:05:22.111794 26754 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0524 21:05:22.111829 26754 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0524 21:05:22.111834 26754 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0524 21:05:22.111885 26754 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0524 21:05:22.111891 26754 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0524 21:05:22.113737 26754 out.go:204] - Booting up control plane ...
I0524 21:05:22.113809 26754 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0524 21:05:22.113816 26754 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0524 21:05:22.113882 26754 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0524 21:05:22.113888 26754 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0524 21:05:22.113942 26754 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0524 21:05:22.113947 26754 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0524 21:05:22.114018 26754 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0524 21:05:22.114024 26754 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0524 21:05:22.114160 26754 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0524 21:05:22.114172 26754 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0524 21:05:22.114241 26754 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.503864 seconds
I0524 21:05:22.114249 26754 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503864 seconds
I0524 21:05:22.114357 26754 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0524 21:05:22.114382 26754 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0524 21:05:22.114500 26754 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0524 21:05:22.114508 26754 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0524 21:05:22.114589 26754 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
I0524 21:05:22.114601 26754 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0524 21:05:22.114824 26754 command_runner.go:130] > [mark-control-plane] Marking the node multinode-935345 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0524 21:05:22.114834 26754 kubeadm.go:322] [mark-control-plane] Marking the node multinode-935345 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0524 21:05:22.114881 26754 command_runner.go:130] > [bootstrap-token] Using token: hdbvlf.a5426sr7dx0g1fbn
I0524 21:05:22.114887 26754 kubeadm.go:322] [bootstrap-token] Using token: hdbvlf.a5426sr7dx0g1fbn
I0524 21:05:22.116491 26754 out.go:204] - Configuring RBAC rules ...
I0524 21:05:22.116601 26754 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0524 21:05:22.116613 26754 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0524 21:05:22.116708 26754 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0524 21:05:22.116717 26754 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0524 21:05:22.116826 26754 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0524 21:05:22.116829 26754 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0524 21:05:22.116995 26754 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0524 21:05:22.117011 26754 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0524 21:05:22.117152 26754 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0524 21:05:22.117164 26754 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0524 21:05:22.117247 26754 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0524 21:05:22.117255 26754 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0524 21:05:22.117406 26754 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0524 21:05:22.117417 26754 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0524 21:05:22.117472 26754 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0524 21:05:22.117480 26754 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0524 21:05:22.117549 26754 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0524 21:05:22.117556 26754 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0524 21:05:22.117559 26754 kubeadm.go:322]
I0524 21:05:22.117623 26754 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
I0524 21:05:22.117632 26754 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0524 21:05:22.117642 26754 kubeadm.go:322]
I0524 21:05:22.117742 26754 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
I0524 21:05:22.117751 26754 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0524 21:05:22.117769 26754 kubeadm.go:322]
I0524 21:05:22.117814 26754 command_runner.go:130] > mkdir -p $HOME/.kube
I0524 21:05:22.117825 26754 kubeadm.go:322] mkdir -p $HOME/.kube
I0524 21:05:22.117907 26754 command_runner.go:130] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0524 21:05:22.117919 26754 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0524 21:05:22.117960 26754 command_runner.go:130] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0524 21:05:22.117966 26754 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0524 21:05:22.117970 26754 kubeadm.go:322]
I0524 21:05:22.118015 26754 command_runner.go:130] > Alternatively, if you are the root user, you can run:
I0524 21:05:22.118020 26754 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0524 21:05:22.118024 26754 kubeadm.go:322]
I0524 21:05:22.118074 26754 command_runner.go:130] > export KUBECONFIG=/etc/kubernetes/admin.conf
I0524 21:05:22.118080 26754 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0524 21:05:22.118084 26754 kubeadm.go:322]
I0524 21:05:22.118128 26754 command_runner.go:130] > You should now deploy a pod network to the cluster.
I0524 21:05:22.118135 26754 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0524 21:05:22.118198 26754 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0524 21:05:22.118205 26754 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0524 21:05:22.118261 26754 command_runner.go:130] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0524 21:05:22.118267 26754 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0524 21:05:22.118270 26754 kubeadm.go:322]
I0524 21:05:22.118375 26754 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
I0524 21:05:22.118385 26754 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0524 21:05:22.118464 26754 command_runner.go:130] > and service account keys on each node and then running the following as root:
I0524 21:05:22.118474 26754 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0524 21:05:22.118480 26754 kubeadm.go:322]
I0524 21:05:22.118612 26754 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hdbvlf.a5426sr7dx0g1fbn \
I0524 21:05:22.118619 26754 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hdbvlf.a5426sr7dx0g1fbn \
I0524 21:05:22.118750 26754 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:cb9ff1f5bf5bbc94cbf036a3e2c087ff0ad7b74e6aafd0cc8516058c6c6a695c \
I0524 21:05:22.118769 26754 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:cb9ff1f5bf5bbc94cbf036a3e2c087ff0ad7b74e6aafd0cc8516058c6c6a695c \
I0524 21:05:22.118794 26754 command_runner.go:130] > --control-plane
I0524 21:05:22.118801 26754 kubeadm.go:322] --control-plane
I0524 21:05:22.118807 26754 kubeadm.go:322]
I0524 21:05:22.118903 26754 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
I0524 21:05:22.118911 26754 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0524 21:05:22.118915 26754 kubeadm.go:322]
I0524 21:05:22.119030 26754 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hdbvlf.a5426sr7dx0g1fbn \
I0524 21:05:22.119040 26754 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hdbvlf.a5426sr7dx0g1fbn \
I0524 21:05:22.119158 26754 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:cb9ff1f5bf5bbc94cbf036a3e2c087ff0ad7b74e6aafd0cc8516058c6c6a695c
I0524 21:05:22.119179 26754 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:cb9ff1f5bf5bbc94cbf036a3e2c087ff0ad7b74e6aafd0cc8516058c6c6a695c
I0524 21:05:22.119194 26754 cni.go:84] Creating CNI manager for ""
I0524 21:05:22.119214 26754 cni.go:136] 1 nodes found, recommending kindnet
I0524 21:05:22.122177 26754 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0524 21:05:22.123691 26754 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0524 21:05:22.135395 26754 command_runner.go:130] > File: /opt/cni/bin/portmap
I0524 21:05:22.135421 26754 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0524 21:05:22.135431 26754 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0524 21:05:22.135439 26754 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0524 21:05:22.135451 26754 command_runner.go:130] > Access: 2023-05-24 21:04:46.041271815 +0000
I0524 21:05:22.135462 26754 command_runner.go:130] > Modify: 2023-05-24 03:44:28.000000000 +0000
I0524 21:05:22.135472 26754 command_runner.go:130] > Change: 2023-05-24 21:04:44.319271815 +0000
I0524 21:05:22.135481 26754 command_runner.go:130] > Birth: -
I0524 21:05:22.135542 26754 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
I0524 21:05:22.135557 26754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0524 21:05:22.175981 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0524 21:05:23.311707 26754 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
I0524 21:05:23.318590 26754 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0524 21:05:23.327426 26754 command_runner.go:130] > serviceaccount/kindnet created
I0524 21:05:23.344118 26754 command_runner.go:130] > daemonset.apps/kindnet created
I0524 21:05:23.347414 26754 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.171401879s)
I0524 21:05:23.347449 26754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0524 21:05:23.347528 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:23.347546 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb minikube.k8s.io/name=multinode-935345 minikube.k8s.io/updated_at=2023_05_24T21_05_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:23.569639 26754 command_runner.go:130] > node/multinode-935345 labeled
I0524 21:05:23.578133 26754 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
I0524 21:05:23.578199 26754 command_runner.go:130] > -16
I0524 21:05:23.578229 26754 ops.go:34] apiserver oom_adj: -16
I0524 21:05:23.578243 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:23.684328 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:24.185125 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:24.288207 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:24.684609 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:24.776980 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:25.185150 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:25.281685 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:25.684906 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:25.773875 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:26.184513 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:26.275999 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:26.685270 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:26.765285 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:27.184776 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:27.284759 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:27.685398 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:27.788636 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:28.184685 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:28.284227 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:28.685539 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:28.782415 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:29.185039 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:29.302503 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:29.684865 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:29.783718 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:30.185285 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:30.282897 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:30.684884 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:30.781362 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:31.184683 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:31.278110 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:31.684712 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:31.768726 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:32.185107 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:32.289628 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:32.685248 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:32.787789 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:33.185246 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:33.290808 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:33.684577 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:33.892517 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:34.184629 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:34.326252 26754 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0524 21:05:34.684750 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0524 21:05:34.869405 26754 command_runner.go:130] > NAME SECRETS AGE
I0524 21:05:34.869425 26754 command_runner.go:130] > default 0 0s
I0524 21:05:34.873390 26754 kubeadm.go:1076] duration metric: took 11.525916227s to wait for elevateKubeSystemPrivileges.
I0524 21:05:34.873423 26754 kubeadm.go:406] StartCluster complete in 24.877563031s
I0524 21:05:34.873445 26754 settings.go:142] acquiring lock: {Name:mk08353081ea525ce0c1fb7db415a70b9551bc95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:34.873528 26754 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/16572-7844/kubeconfig
I0524 21:05:34.874168 26754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16572-7844/kubeconfig: {Name:mk2bf6c24d4c095b17794f05531ade01d9ad71ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:05:34.874381 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0524 21:05:34.874404 26754 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0524 21:05:34.874472 26754 addons.go:66] Setting storage-provisioner=true in profile "multinode-935345"
I0524 21:05:34.874487 26754 addons.go:66] Setting default-storageclass=true in profile "multinode-935345"
I0524 21:05:34.874496 26754 addons.go:228] Setting addon storage-provisioner=true in "multinode-935345"
I0524 21:05:34.874505 26754 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-935345"
I0524 21:05:34.874561 26754 host.go:66] Checking if "multinode-935345" exists ...
I0524 21:05:34.874624 26754 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:05:34.874715 26754 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/16572-7844/kubeconfig
I0524 21:05:34.874944 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:05:34.874972 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:05:34.874975 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:05:34.875016 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:05:34.874999 26754 kapi.go:59] client config for multinode-935345: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt", KeyFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key", CAFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0524 21:05:34.875846 26754 cert_rotation.go:137] Starting client certificate rotation controller
I0524 21:05:34.876198 26754 round_trippers.go:463] GET https://192.168.39.141:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0524 21:05:34.876214 26754 round_trippers.go:469] Request Headers:
I0524 21:05:34.876226 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:34.876239 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:34.887996 26754 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0524 21:05:34.888016 26754 round_trippers.go:577] Response Headers:
I0524 21:05:34.888026 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:34.888035 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:34.888044 26754 round_trippers.go:580] Content-Length: 291
I0524 21:05:34.888053 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:34 GMT
I0524 21:05:34.888066 26754 round_trippers.go:580] Audit-Id: dccbf294-21d2-4450-bbc2-a3d4675587ab
I0524 21:05:34.888078 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:34.888088 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:34.888402 26754 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ae8d78ae-9ffd-4b14-9e3b-aca097f80b28","resourceVersion":"392","creationTimestamp":"2023-05-24T21:05:21Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0524 21:05:34.888963 26754 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ae8d78ae-9ffd-4b14-9e3b-aca097f80b28","resourceVersion":"392","creationTimestamp":"2023-05-24T21:05:21Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0524 21:05:34.889026 26754 round_trippers.go:463] PUT https://192.168.39.141:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0524 21:05:34.889035 26754 round_trippers.go:469] Request Headers:
I0524 21:05:34.889046 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:34.889058 26754 round_trippers.go:473] Content-Type: application/json
I0524 21:05:34.889072 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:34.889576 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
I0524 21:05:34.889597 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38479
I0524 21:05:34.889990 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:05:34.890015 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:05:34.890473 26754 main.go:141] libmachine: Using API Version 1
I0524 21:05:34.890480 26754 main.go:141] libmachine: Using API Version 1
I0524 21:05:34.890491 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:05:34.890497 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:05:34.890794 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:05:34.890843 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:05:34.890967 26754 main.go:141] libmachine: (multinode-935345) Calling .GetState
I0524 21:05:34.891382 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:05:34.891429 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:05:34.892940 26754 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/16572-7844/kubeconfig
I0524 21:05:34.893247 26754 kapi.go:59] client config for multinode-935345: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt", KeyFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key", CAFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0524 21:05:34.893632 26754 round_trippers.go:463] GET https://192.168.39.141:8443/apis/storage.k8s.io/v1/storageclasses
I0524 21:05:34.893649 26754 round_trippers.go:469] Request Headers:
I0524 21:05:34.893661 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:34.893671 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:34.900477 26754 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0524 21:05:34.900495 26754 round_trippers.go:577] Response Headers:
I0524 21:05:34.900505 26754 round_trippers.go:580] Audit-Id: 22591039-2111-4d3d-96a6-b07949aa019f
I0524 21:05:34.900514 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:34.900522 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:34.900532 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:34.900543 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:34.900552 26754 round_trippers.go:580] Content-Length: 291
I0524 21:05:34.900562 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:34 GMT
I0524 21:05:34.900811 26754 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0524 21:05:34.900837 26754 round_trippers.go:577] Response Headers:
I0524 21:05:34.900847 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:34.900864 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:34.900876 26754 round_trippers.go:580] Content-Length: 109
I0524 21:05:34.900890 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:34 GMT
I0524 21:05:34.900899 26754 round_trippers.go:580] Audit-Id: 40fd2239-05c7-4920-8f3e-3318e88f222e
I0524 21:05:34.900908 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:34.900917 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:34.900816 26754 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ae8d78ae-9ffd-4b14-9e3b-aca097f80b28","resourceVersion":"393","creationTimestamp":"2023-05-24T21:05:21Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
I0524 21:05:34.900987 26754 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"393"},"items":[]}
I0524 21:05:34.901212 26754 addons.go:228] Setting addon default-storageclass=true in "multinode-935345"
I0524 21:05:34.901247 26754 host.go:66] Checking if "multinode-935345" exists ...
I0524 21:05:34.901538 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:05:34.901567 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:05:34.906091 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
I0524 21:05:34.906441 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:05:34.907090 26754 main.go:141] libmachine: Using API Version 1
I0524 21:05:34.907121 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:05:34.907566 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:05:34.907771 26754 main.go:141] libmachine: (multinode-935345) Calling .GetState
I0524 21:05:34.909500 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:05:34.912694 26754 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0524 21:05:34.914597 26754 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0524 21:05:34.914613 26754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0524 21:05:34.914627 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:05:34.917433 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37155
I0524 21:05:34.917663 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:05:34.917927 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:05:34.918067 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:05:34.918095 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:05:34.918269 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:05:34.918398 26754 main.go:141] libmachine: Using API Version 1
I0524 21:05:34.918422 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:05:34.918441 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:05:34.918587 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:05:34.918723 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa Username:docker}
I0524 21:05:34.918771 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:05:34.919196 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:05:34.919224 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:05:34.933473 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
I0524 21:05:34.933989 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:05:34.934563 26754 main.go:141] libmachine: Using API Version 1
I0524 21:05:34.934591 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:05:34.934963 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:05:34.935201 26754 main.go:141] libmachine: (multinode-935345) Calling .GetState
I0524 21:05:34.936885 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:05:34.937128 26754 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0524 21:05:34.937144 26754 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0524 21:05:34.937164 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:05:34.940370 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:05:34.940821 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:05:34.940852 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:05:34.941014 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:05:34.941200 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:05:34.941364 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:05:34.941504 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa Username:docker}
I0524 21:05:35.127345 26754 command_runner.go:130] > apiVersion: v1
I0524 21:05:35.127369 26754 command_runner.go:130] > data:
I0524 21:05:35.127376 26754 command_runner.go:130] >   Corefile: |
I0524 21:05:35.127383 26754 command_runner.go:130] >     .:53 {
I0524 21:05:35.127389 26754 command_runner.go:130] >         errors
I0524 21:05:35.127396 26754 command_runner.go:130] >         health {
I0524 21:05:35.127402 26754 command_runner.go:130] >            lameduck 5s
I0524 21:05:35.127408 26754 command_runner.go:130] >         }
I0524 21:05:35.127414 26754 command_runner.go:130] >         ready
I0524 21:05:35.127425 26754 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
I0524 21:05:35.127434 26754 command_runner.go:130] >            pods insecure
I0524 21:05:35.127443 26754 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
I0524 21:05:35.127453 26754 command_runner.go:130] >            ttl 30
I0524 21:05:35.127460 26754 command_runner.go:130] >         }
I0524 21:05:35.127469 26754 command_runner.go:130] >         prometheus :9153
I0524 21:05:35.127480 26754 command_runner.go:130] >         forward . /etc/resolv.conf {
I0524 21:05:35.127490 26754 command_runner.go:130] >            max_concurrent 1000
I0524 21:05:35.127498 26754 command_runner.go:130] >         }
I0524 21:05:35.127507 26754 command_runner.go:130] >         cache 30
I0524 21:05:35.127518 26754 command_runner.go:130] >         loop
I0524 21:05:35.127528 26754 command_runner.go:130] >         reload
I0524 21:05:35.127537 26754 command_runner.go:130] >         loadbalance
I0524 21:05:35.127542 26754 command_runner.go:130] >     }
I0524 21:05:35.127552 26754 command_runner.go:130] > kind: ConfigMap
I0524 21:05:35.127558 26754 command_runner.go:130] > metadata:
I0524 21:05:35.127570 26754 command_runner.go:130] >   creationTimestamp: "2023-05-24T21:05:21Z"
I0524 21:05:35.127579 26754 command_runner.go:130] >   name: coredns
I0524 21:05:35.127588 26754 command_runner.go:130] >   namespace: kube-system
I0524 21:05:35.127595 26754 command_runner.go:130] >   resourceVersion: "270"
I0524 21:05:35.127605 26754 command_runner.go:130] >   uid: a32836dc-fc0f-45d3-a9ef-f8f4c31851f9
I0524 21:05:35.127797 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
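The `ssh_runner` command above pipes the live coredns ConfigMap through `sed` to splice a `hosts {}` block (resolving `host.minikube.internal` to the gateway IP) in front of the `forward . /etc/resolv.conf` plugin, and a `log` directive in front of `errors`, before `kubectl replace`-ing the result. A minimal local reproduction of just the text transformation, no cluster required (the file paths and the sample Corefile fragment below are illustrative, and GNU sed `i\` syntax is assumed):

```shell
#!/usr/bin/env sh
# Sample Corefile fragment standing in for the real ConfigMap contents.
cat > /tmp/corefile-demo <<'EOF'
.:53 {
    errors
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
}
EOF

# Same idea as the logged sed pipeline: insert a hosts{} block before the
# forward plugin, and a log directive before errors.
sed -e '/forward . \/etc\/resolv.conf/i\    hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }' \
    -e '/^    errors *$/i\    log' /tmp/corefile-demo > /tmp/corefile-demo.out

cat /tmp/corefile-demo.out
```

The `hosts` plugin must appear before `forward` so the in-VM name is answered locally instead of being forwarded to the host's resolver; `fallthrough` lets every other name continue down the plugin chain.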
I0524 21:05:35.152714 26754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0524 21:05:35.226011 26754 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0524 21:05:35.401390 26754 round_trippers.go:463] GET https://192.168.39.141:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0524 21:05:35.401418 26754 round_trippers.go:469] Request Headers:
I0524 21:05:35.401429 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:35.401439 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:35.404205 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:35.404226 26754 round_trippers.go:577] Response Headers:
I0524 21:05:35.404233 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:35 GMT
I0524 21:05:35.404239 26754 round_trippers.go:580] Audit-Id: 7437231b-35bf-4ed3-80d0-3a4dd24c24c1
I0524 21:05:35.404244 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:35.404254 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:35.404259 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:35.404264 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:35.404270 26754 round_trippers.go:580] Content-Length: 291
I0524 21:05:35.404288 26754 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ae8d78ae-9ffd-4b14-9e3b-aca097f80b28","resourceVersion":"403","creationTimestamp":"2023-05-24T21:05:21Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0524 21:05:35.404370 26754 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-935345" context rescaled to 1 replicas
I0524 21:05:35.404394 26754 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0524 21:05:35.406590 26754 out.go:177] * Verifying Kubernetes components...
I0524 21:05:35.408274 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0524 21:05:36.282019 26754 command_runner.go:130] > configmap/coredns replaced
I0524 21:05:36.282059 26754 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.154236964s)
I0524 21:05:36.282076 26754 start.go:916] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0524 21:05:36.409309 26754 command_runner.go:130] > serviceaccount/storage-provisioner created
I0524 21:05:36.409339 26754 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
I0524 21:05:36.409350 26754 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0524 21:05:36.409361 26754 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0524 21:05:36.409372 26754 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
I0524 21:05:36.409379 26754 command_runner.go:130] > pod/storage-provisioner created
I0524 21:05:36.409407 26754 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.256656519s)
I0524 21:05:36.409449 26754 main.go:141] libmachine: Making call to close driver server
I0524 21:05:36.409465 26754 main.go:141] libmachine: (multinode-935345) Calling .Close
I0524 21:05:36.409487 26754 command_runner.go:130] > storageclass.storage.k8s.io/standard created
I0524 21:05:36.409514 26754 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.183479042s)
I0524 21:05:36.409539 26754 main.go:141] libmachine: Making call to close driver server
I0524 21:05:36.409556 26754 main.go:141] libmachine: (multinode-935345) Calling .Close
I0524 21:05:36.409555 26754 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.001265538s)
I0524 21:05:36.409841 26754 main.go:141] libmachine: Successfully made call to close driver server
I0524 21:05:36.409855 26754 main.go:141] libmachine: (multinode-935345) DBG | Closing plugin on server side
I0524 21:05:36.409873 26754 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 21:05:36.409890 26754 main.go:141] libmachine: Making call to close driver server
I0524 21:05:36.409891 26754 main.go:141] libmachine: (multinode-935345) DBG | Closing plugin on server side
I0524 21:05:36.409902 26754 main.go:141] libmachine: (multinode-935345) Calling .Close
I0524 21:05:36.409937 26754 main.go:141] libmachine: Successfully made call to close driver server
I0524 21:05:36.409954 26754 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 21:05:36.409994 26754 main.go:141] libmachine: Making call to close driver server
I0524 21:05:36.410011 26754 main.go:141] libmachine: (multinode-935345) Calling .Close
I0524 21:05:36.410021 26754 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/16572-7844/kubeconfig
I0524 21:05:36.410151 26754 main.go:141] libmachine: Successfully made call to close driver server
I0524 21:05:36.410179 26754 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 21:05:36.410190 26754 main.go:141] libmachine: (multinode-935345) DBG | Closing plugin on server side
I0524 21:05:36.410248 26754 main.go:141] libmachine: Successfully made call to close driver server
I0524 21:05:36.410262 26754 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 21:05:36.410274 26754 main.go:141] libmachine: Making call to close driver server
I0524 21:05:36.410289 26754 main.go:141] libmachine: (multinode-935345) Calling .Close
I0524 21:05:36.410313 26754 kapi.go:59] client config for multinode-935345: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt", KeyFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key", CAFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0524 21:05:36.410500 26754 main.go:141] libmachine: Successfully made call to close driver server
I0524 21:05:36.410519 26754 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 21:05:36.410638 26754 node_ready.go:35] waiting up to 6m0s for node "multinode-935345" to be "Ready" ...
I0524 21:05:36.412588 26754 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0524 21:05:36.410710 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:36.414671 26754 round_trippers.go:469] Request Headers:
I0524 21:05:36.414681 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:36.414688 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:36.414690 26754 addons.go:499] enable addons completed in 1.540274945s: enabled=[storage-provisioner default-storageclass]
I0524 21:05:36.418229 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:36.418249 26754 round_trippers.go:577] Response Headers:
I0524 21:05:36.418258 26754 round_trippers.go:580] Audit-Id: f98f0209-4886-4ef9-9141-f7591b499e6f
I0524 21:05:36.418266 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:36.418273 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:36.418284 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:36.418297 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:36.418308 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:36 GMT
I0524 21:05:36.418412 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:36.919831 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:36.919855 26754 round_trippers.go:469] Request Headers:
I0524 21:05:36.919863 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:36.919869 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:36.922732 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:36.922757 26754 round_trippers.go:577] Response Headers:
I0524 21:05:36.922768 26754 round_trippers.go:580] Audit-Id: a0d78d35-6f21-4deb-8999-1ab833adf6f7
I0524 21:05:36.922780 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:36.922792 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:36.922804 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:36.922812 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:36.922824 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:36 GMT
I0524 21:05:36.923250 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:37.420047 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:37.420076 26754 round_trippers.go:469] Request Headers:
I0524 21:05:37.420088 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:37.420098 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:37.422911 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:37.422936 26754 round_trippers.go:577] Response Headers:
I0524 21:05:37.422944 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:37.422949 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:37.422957 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:37.422963 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:37.422969 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:37 GMT
I0524 21:05:37.422974 26754 round_trippers.go:580] Audit-Id: 55d742ed-fb73-499c-9163-bc2b19991bf1
I0524 21:05:37.423076 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:37.919585 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:37.919606 26754 round_trippers.go:469] Request Headers:
I0524 21:05:37.919614 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:37.919620 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:37.922324 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:37.922344 26754 round_trippers.go:577] Response Headers:
I0524 21:05:37.922351 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:37.922357 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:37.922363 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:37 GMT
I0524 21:05:37.922368 26754 round_trippers.go:580] Audit-Id: 00b2ef77-c67c-4988-baa3-430851a2edc5
I0524 21:05:37.922373 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:37.922378 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:37.922460 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:38.420104 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:38.420127 26754 round_trippers.go:469] Request Headers:
I0524 21:05:38.420134 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:38.420141 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:38.423224 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:38.423241 26754 round_trippers.go:577] Response Headers:
I0524 21:05:38.423250 26754 round_trippers.go:580] Audit-Id: 1dbb9e88-969c-4eaf-8263-ae5e24e2044b
I0524 21:05:38.423255 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:38.423261 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:38.423266 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:38.423272 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:38.423282 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:38 GMT
I0524 21:05:38.423781 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi
I0524 21:05:38.424087 26754 node_ready.go:58] node "multinode-935345" has status "Ready":"False"
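The repeated `GET /api/v1/nodes/multinode-935345` requests above are minikube's readiness poll: `node_ready.go` re-checks the node roughly every 500ms, for up to the 6m0s budget set earlier, until the node's `Ready` condition turns `True`. A minimal sketch of the same bounded-poll pattern (the `check_ready` stub is illustrative, not minikube's code; it reports Ready on the third probe so the script runs without a cluster):

```shell
#!/usr/bin/env sh
# Bounded poll: retry a readiness probe until it succeeds or a deadline passes.
attempts=0
check_ready() {
    # Stub for the real API check; succeeds on the third probe.
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

deadline=$(( $(date +%s) + 360 ))   # 6m0s budget, as in the log
until check_ready; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for node Ready" >&2
        exit 1
    fi
    sleep 1   # the logged poll interval is closer to 500ms
done
echo "node became Ready after $attempts probes"
```

The deadline check sits inside the loop body so a probe that never succeeds still terminates with a nonzero exit, which is what lets the caller report a timeout instead of hanging.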
I0524 21:05:38.919762 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:38.919781 26754 round_trippers.go:469] Request Headers:
I0524 21:05:38.919794 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:38.919803 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:38.923974 26754 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0524 21:05:38.923996 26754 round_trippers.go:577] Response Headers:
I0524 21:05:38.924007 26754 round_trippers.go:580] Audit-Id: dc2b27ef-d638-4a37-af08-d62e910cb72b
I0524 21:05:38.924016 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:38.924023 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:38.924031 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:38.924040 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:38.924051 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:38 GMT
I0524 21:05:38.924244 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:39.419814 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:39.419833 26754 round_trippers.go:469] Request Headers:
I0524 21:05:39.419847 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:39.419865 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:39.422329 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:39.422349 26754 round_trippers.go:577] Response Headers:
I0524 21:05:39.422359 26754 round_trippers.go:580] Audit-Id: 24386d78-2d5a-4907-bc7e-95ea77aad508
I0524 21:05:39.422368 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:39.422380 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:39.422390 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:39.422402 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:39.422411 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:39 GMT
I0524 21:05:39.422871 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:39.920001 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:39.920019 26754 round_trippers.go:469] Request Headers:
I0524 21:05:39.920032 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:39.920039 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:39.923297 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:39.923318 26754 round_trippers.go:577] Response Headers:
I0524 21:05:39.923326 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:39.923339 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:39.923348 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:39.923362 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:39.923371 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:39 GMT
I0524 21:05:39.923381 26754 round_trippers.go:580] Audit-Id: d4803226-ba86-4790-bbc0-a958ea1725ec
I0524 21:05:39.923801 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:40.419632 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:40.419672 26754 round_trippers.go:469] Request Headers:
I0524 21:05:40.419680 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:40.419686 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:40.422473 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:40.422497 26754 round_trippers.go:577] Response Headers:
I0524 21:05:40.422506 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:40 GMT
I0524 21:05:40.422512 26754 round_trippers.go:580] Audit-Id: f02167a9-b1b4-4596-b1aa-a0c7f35cd357
I0524 21:05:40.422517 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:40.422522 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:40.422528 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:40.422534 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:40.423265 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:40.919993 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:40.920017 26754 round_trippers.go:469] Request Headers:
I0524 21:05:40.920025 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:40.920031 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:40.922956 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:40.922977 26754 round_trippers.go:577] Response Headers:
I0524 21:05:40.922984 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:40 GMT
I0524 21:05:40.922990 26754 round_trippers.go:580] Audit-Id: 5e4b8266-8bd2-4003-a3e9-3069e7a0e89e
I0524 21:05:40.922995 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:40.923000 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:40.923007 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:40.923016 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:40.923154 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:40.923433 26754 node_ready.go:58] node "multinode-935345" has status "Ready":"False"
I0524 21:05:41.419804 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:41.419826 26754 round_trippers.go:469] Request Headers:
I0524 21:05:41.419833 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:41.419839 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:41.429748 26754 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0524 21:05:41.429768 26754 round_trippers.go:577] Response Headers:
I0524 21:05:41.429779 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:41.429788 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:41.429796 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:41.429805 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:41 GMT
I0524 21:05:41.429814 26754 round_trippers.go:580] Audit-Id: 2685de0f-0229-4834-a722-4d1ca8f14252
I0524 21:05:41.429823 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:41.430217 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:41.919932 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:41.919961 26754 round_trippers.go:469] Request Headers:
I0524 21:05:41.919974 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:41.919984 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:41.923254 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:41.923274 26754 round_trippers.go:577] Response Headers:
I0524 21:05:41.923284 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:41.923292 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:41 GMT
I0524 21:05:41.923301 26754 round_trippers.go:580] Audit-Id: 26345a90-ce6e-4bca-bc74-ff9984ec6872
I0524 21:05:41.923309 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:41.923317 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:41.923331 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:41.923622 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:42.419245 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:42.419264 26754 round_trippers.go:469] Request Headers:
I0524 21:05:42.419272 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:42.419279 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:42.423701 26754 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0524 21:05:42.423723 26754 round_trippers.go:577] Response Headers:
I0524 21:05:42.423733 26754 round_trippers.go:580] Audit-Id: 7cb11d84-0551-4680-a1a8-bd78c4e1879c
I0524 21:05:42.423739 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:42.423745 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:42.423750 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:42.423755 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:42.423761 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:42 GMT
I0524 21:05:42.424324 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:42.920012 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:42.920034 26754 round_trippers.go:469] Request Headers:
I0524 21:05:42.920042 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:42.920048 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:42.923438 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:42.923455 26754 round_trippers.go:577] Response Headers:
I0524 21:05:42.923462 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:42.923467 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:42.923472 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:42 GMT
I0524 21:05:42.923478 26754 round_trippers.go:580] Audit-Id: ae798e35-de2d-44bc-bfd5-81eb1d3c9d19
I0524 21:05:42.923483 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:42.923489 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:42.923864 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:42.924137 26754 node_ready.go:58] node "multinode-935345" has status "Ready":"False"
I0524 21:05:43.419488 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:43.419510 26754 round_trippers.go:469] Request Headers:
I0524 21:05:43.419518 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:43.419525 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:43.422641 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:43.422659 26754 round_trippers.go:577] Response Headers:
I0524 21:05:43.422666 26754 round_trippers.go:580] Audit-Id: 4ed0e8b3-a9fa-4c0f-a9c7-e17a81eca70f
I0524 21:05:43.422671 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:43.422678 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:43.422686 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:43.422694 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:43.422703 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:43 GMT
I0524 21:05:43.423443 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"344","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
I0524 21:05:43.919760 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:43.919780 26754 round_trippers.go:469] Request Headers:
I0524 21:05:43.919788 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:43.919795 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:43.923527 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:43.923546 26754 round_trippers.go:577] Response Headers:
I0524 21:05:43.923556 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:43.923561 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:43.923566 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:43.923572 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:43.923585 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:43 GMT
I0524 21:05:43.923590 26754 round_trippers.go:580] Audit-Id: 1c5a3c73-dc40-4ef0-80ac-b40c30cd53f6
I0524 21:05:43.923824 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:43.924087 26754 node_ready.go:49] node "multinode-935345" has status "Ready":"True"
I0524 21:05:43.924101 26754 node_ready.go:38] duration metric: took 7.513445246s waiting for node "multinode-935345" to be "Ready" ...
I0524 21:05:43.924109 26754 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0524 21:05:43.924158 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
I0524 21:05:43.924166 26754 round_trippers.go:469] Request Headers:
I0524 21:05:43.924172 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:43.924178 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:43.928464 26754 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0524 21:05:43.928486 26754 round_trippers.go:577] Response Headers:
I0524 21:05:43.928497 26754 round_trippers.go:580] Audit-Id: 9e53682b-fe10-4aab-9917-2a2646b48232
I0524 21:05:43.928506 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:43.928513 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:43.928519 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:43.928524 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:43.928529 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:43 GMT
I0524 21:05:43.929296 26754 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"441","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
I0524 21:05:43.933628 26754 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-b58rt" in "kube-system" namespace to be "Ready" ...
I0524 21:05:43.933687 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-b58rt
I0524 21:05:43.933695 26754 round_trippers.go:469] Request Headers:
I0524 21:05:43.933702 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:43.933708 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:43.935956 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:43.935971 26754 round_trippers.go:577] Response Headers:
I0524 21:05:43.935977 26754 round_trippers.go:580] Audit-Id: 8aedba4c-0118-4516-9bc9-32be92f4f206
I0524 21:05:43.935983 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:43.935989 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:43.935998 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:43.936006 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:43.936016 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:43 GMT
I0524 21:05:43.936265 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"441","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0524 21:05:43.936614 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:43.936626 26754 round_trippers.go:469] Request Headers:
I0524 21:05:43.936633 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:43.936639 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:43.938738 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:43.938752 26754 round_trippers.go:577] Response Headers:
I0524 21:05:43.938758 26754 round_trippers.go:580] Audit-Id: 3d2b8097-3d3f-4b4e-829d-8f271942ee84
I0524 21:05:43.938766 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:43.938775 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:43.938784 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:43.938793 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:43.938805 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:43 GMT
I0524 21:05:43.938923 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:44.439965 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-b58rt
I0524 21:05:44.439999 26754 round_trippers.go:469] Request Headers:
I0524 21:05:44.440013 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:44.440020 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:44.444664 26754 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0524 21:05:44.444682 26754 round_trippers.go:577] Response Headers:
I0524 21:05:44.444693 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:44.444700 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:44.444708 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:44.444716 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:44 GMT
I0524 21:05:44.444725 26754 round_trippers.go:580] Audit-Id: 90a8d54a-3f11-451e-83da-105690932b92
I0524 21:05:44.444735 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:44.444876 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"441","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0524 21:05:44.445274 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:44.445285 26754 round_trippers.go:469] Request Headers:
I0524 21:05:44.445293 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:44.445299 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:44.447693 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:44.447705 26754 round_trippers.go:577] Response Headers:
I0524 21:05:44.447711 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:44.447716 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:44.447722 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:44 GMT
I0524 21:05:44.447727 26754 round_trippers.go:580] Audit-Id: e3b94d95-0eaf-44da-a347-888e5dd6db24
I0524 21:05:44.447732 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:44.447737 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:44.447994 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:44.940380 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-b58rt
I0524 21:05:44.940405 26754 round_trippers.go:469] Request Headers:
I0524 21:05:44.940413 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:44.940420 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:44.943158 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:44.943183 26754 round_trippers.go:577] Response Headers:
I0524 21:05:44.943193 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:44.943202 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:44 GMT
I0524 21:05:44.943207 26754 round_trippers.go:580] Audit-Id: 8e7aede5-4bb7-4863-9029-c83b97c10d52
I0524 21:05:44.943213 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:44.943218 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:44.943227 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:44.943336 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"441","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0524 21:05:44.943796 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:44.943808 26754 round_trippers.go:469] Request Headers:
I0524 21:05:44.943815 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:44.943821 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:44.946156 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:44.946173 26754 round_trippers.go:577] Response Headers:
I0524 21:05:44.946180 26754 round_trippers.go:580] Audit-Id: eb79dee2-5d72-4e52-befa-aeff3c5cd05e
I0524 21:05:44.946185 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:44.946190 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:44.946196 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:44.946201 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:44.946206 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:44 GMT
I0524 21:05:44.946347 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:45.439759 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-b58rt
I0524 21:05:45.439778 26754 round_trippers.go:469] Request Headers:
I0524 21:05:45.439786 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:45.439793 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:45.442516 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:45.442537 26754 round_trippers.go:577] Response Headers:
I0524 21:05:45.442565 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:45.442575 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:45.442583 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:45.442591 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:45.442604 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:45 GMT
I0524 21:05:45.442613 26754 round_trippers.go:580] Audit-Id: 0e89dac1-cb97-4896-a8ec-3aa7f4fa8113
I0524 21:05:45.443016 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"441","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0524 21:05:45.443443 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:45.443455 26754 round_trippers.go:469] Request Headers:
I0524 21:05:45.443464 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:45.443474 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:45.445719 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:45.445734 26754 round_trippers.go:577] Response Headers:
I0524 21:05:45.445741 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:45.445750 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:45.445755 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:45 GMT
I0524 21:05:45.445761 26754 round_trippers.go:580] Audit-Id: f7bdcee9-5d51-45dc-9711-778edb82bcbd
I0524 21:05:45.445766 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:45.445771 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:45.446048 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:45.939659 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-b58rt
I0524 21:05:45.939680 26754 round_trippers.go:469] Request Headers:
I0524 21:05:45.939688 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:45.939694 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:45.942196 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:45.942212 26754 round_trippers.go:577] Response Headers:
I0524 21:05:45.942219 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:45 GMT
I0524 21:05:45.942225 26754 round_trippers.go:580] Audit-Id: 4c034081-a8a0-4f53-bd7e-fc8a0120b137
I0524 21:05:45.942230 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:45.942235 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:45.942240 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:45.942245 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:45.942736 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"441","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
I0524 21:05:45.943142 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:45.943155 26754 round_trippers.go:469] Request Headers:
I0524 21:05:45.943162 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:45.943169 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:45.945287 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:45.945303 26754 round_trippers.go:577] Response Headers:
I0524 21:05:45.945309 26754 round_trippers.go:580] Audit-Id: b61e5be4-893d-4e18-8e16-38e52e60647a
I0524 21:05:45.945315 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:45.945320 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:45.945331 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:45.945342 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:45.945353 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:45 GMT
I0524 21:05:45.945660 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:45.945937 26754 pod_ready.go:102] pod "coredns-5d78c9869d-b58rt" in "kube-system" namespace has status "Ready":"False"
I0524 21:05:46.440352 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-b58rt
I0524 21:05:46.440375 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.440384 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.440400 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.443434 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:46.443451 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.443458 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.443464 26754 round_trippers.go:580] Audit-Id: 8423122f-abe9-48b4-ab8f-33d58f0084c2
I0524 21:05:46.443469 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.443474 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.443486 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.443494 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.443969 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"451","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
I0524 21:05:46.444415 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:46.444427 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.444434 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.444440 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.446422 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:05:46.446440 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.446453 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.446460 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.446466 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.446471 26754 round_trippers.go:580] Audit-Id: 50cb3ef7-348a-4742-adaa-9c3bb007eab4
I0524 21:05:46.446476 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.446481 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.446777 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:46.447085 26754 pod_ready.go:92] pod "coredns-5d78c9869d-b58rt" in "kube-system" namespace has status "Ready":"True"
I0524 21:05:46.447100 26754 pod_ready.go:81] duration metric: took 2.513451681s waiting for pod "coredns-5d78c9869d-b58rt" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.447107 26754 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.447147 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-935345
I0524 21:05:46.447154 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.447160 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.447166 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.449227 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:46.449242 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.449249 26754 round_trippers.go:580] Audit-Id: 82d93bfa-1e3e-4e7c-845b-4df062368898
I0524 21:05:46.449254 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.449260 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.449265 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.449270 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.449275 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.449659 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-935345","namespace":"kube-system","uid":"e8724e83-c511-481d-a9ca-8c0943c03817","resourceVersion":"426","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.141:2379","kubernetes.io/config.hash":"5dc37d6c21cc4c3942f55949a2300f81","kubernetes.io/config.mirror":"5dc37d6c21cc4c3942f55949a2300f81","kubernetes.io/config.seen":"2023-05-24T21:05:22.144619745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
I0524 21:05:46.449985 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:46.449997 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.450003 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.450009 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.451927 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:05:46.451942 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.451952 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.451959 26754 round_trippers.go:580] Audit-Id: 9dc8228e-1e3e-4fa4-b58c-c96a7ac0d063
I0524 21:05:46.451968 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.451976 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.451984 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.451993 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.452177 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:46.452439 26754 pod_ready.go:92] pod "etcd-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:05:46.452452 26754 pod_ready.go:81] duration metric: took 5.339898ms waiting for pod "etcd-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.452465 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.452502 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-935345
I0524 21:05:46.452511 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.452518 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.452524 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.454270 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:05:46.454282 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.454288 26754 round_trippers.go:580] Audit-Id: 1baed539-7fbe-4c68-ad30-0dffaaf3f126
I0524 21:05:46.454293 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.454298 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.454303 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.454308 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.454313 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.454607 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-935345","namespace":"kube-system","uid":"4e1d7b9b-2385-4595-80da-1cbc3e9804e6","resourceVersion":"427","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.141:8443","kubernetes.io/config.hash":"cbb29c660ce956b2d6c62dd44f97e9c5","kubernetes.io/config.mirror":"cbb29c660ce956b2d6c62dd44f97e9c5","kubernetes.io/config.seen":"2023-05-24T21:05:22.144620796Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
I0524 21:05:46.455098 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:46.455114 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.455121 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.455128 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.456712 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:05:46.456728 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.456737 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.456745 26754 round_trippers.go:580] Audit-Id: 0fb98e73-e3e1-4bab-bb80-a4942e41cbb2
I0524 21:05:46.456753 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.456761 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.456770 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.456779 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.456928 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:46.457195 26754 pod_ready.go:92] pod "kube-apiserver-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:05:46.457208 26754 pod_ready.go:81] duration metric: took 4.736309ms waiting for pod "kube-apiserver-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.457216 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.457270 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-935345
I0524 21:05:46.457282 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.457293 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.457305 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.459361 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:46.459373 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.459379 26754 round_trippers.go:580] Audit-Id: 7855a430-0737-4a2d-a3fc-6f4cc1940ded
I0524 21:05:46.459384 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.459389 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.459394 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.459399 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.459411 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.459907 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-935345","namespace":"kube-system","uid":"58257693-9611-4d6d-a90d-c54c61f9bdb6","resourceVersion":"428","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ee83c04bd6438a0f47b0e07ad320ac0","kubernetes.io/config.mirror":"4ee83c04bd6438a0f47b0e07ad320ac0","kubernetes.io/config.seen":"2023-05-24T21:05:22.144621666Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
I0524 21:05:46.460206 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:46.460215 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.460242 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.460248 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.461997 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:05:46.462010 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.462016 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.462022 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.462028 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.462037 26754 round_trippers.go:580] Audit-Id: f993c11e-4aaf-4f4a-9c4e-f725b74c2d54
I0524 21:05:46.462048 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.462059 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.462164 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:46.462362 26754 pod_ready.go:92] pod "kube-controller-manager-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:05:46.462372 26754 pod_ready.go:81] duration metric: took 5.149927ms waiting for pod "kube-controller-manager-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.462379 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5gdf" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.462408 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5gdf
I0524 21:05:46.462421 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.462428 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.462436 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.464465 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:46.464477 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.464483 26754 round_trippers.go:580] Audit-Id: a665de7b-3823-413b-a572-068666158c2a
I0524 21:05:46.464488 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.464494 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.464499 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.464506 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.464514 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.464841 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5gdf","generateName":"kube-proxy-","namespace":"kube-system","uid":"5f24e81d-a75c-49a0-919b-2266a2a0fd94","resourceVersion":"420","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"892e89de-a123-493e-b092-5b426c6044d4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"892e89de-a123-493e-b092-5b426c6044d4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5535 chars]
I0524 21:05:46.465176 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:46.465188 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.465195 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.465201 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.466974 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:05:46.466991 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.466999 26754 round_trippers.go:580] Audit-Id: c2ac66bc-a5b5-443b-9ec1-2351d3a83674
I0524 21:05:46.467007 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.467015 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.467024 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.467033 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.467045 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.467189 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:46.467442 26754 pod_ready.go:92] pod "kube-proxy-j5gdf" in "kube-system" namespace has status "Ready":"True"
I0524 21:05:46.467459 26754 pod_ready.go:81] duration metric: took 5.074906ms waiting for pod "kube-proxy-j5gdf" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.467465 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.640821 26754 request.go:628] Waited for 173.307208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-935345
I0524 21:05:46.640880 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-935345
I0524 21:05:46.640884 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.640892 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.640898 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.643769 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:46.643792 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.643803 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.643812 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.643818 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.643824 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.643829 26754 round_trippers.go:580] Audit-Id: 1c5c0142-8530-4e54-9997-1aa381400485
I0524 21:05:46.643834 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.643993 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-935345","namespace":"kube-system","uid":"389454ac-5dbb-4456-a505-cf6a21fb81d4","resourceVersion":"429","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"541a493605483a25e1c768fdd305f2b9","kubernetes.io/config.mirror":"541a493605483a25e1c768fdd305f2b9","kubernetes.io/config.seen":"2023-05-24T21:05:22.144616107Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
I0524 21:05:46.840691 26754 request.go:628] Waited for 196.355455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:46.840757 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:05:46.840761 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.840768 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.840775 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.843457 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:46.843488 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.843498 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.843512 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.843521 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.843532 26754 round_trippers.go:580] Audit-Id: 9abaeafe-58fc-49c0-9549-93e19b71b155
I0524 21:05:46.843537 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.843542 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.844019 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
I0524 21:05:46.844296 26754 pod_ready.go:92] pod "kube-scheduler-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:05:46.844312 26754 pod_ready.go:81] duration metric: took 376.839889ms waiting for pod "kube-scheduler-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:05:46.844325 26754 pod_ready.go:38] duration metric: took 2.920205971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0524 21:05:46.844350 26754 api_server.go:52] waiting for apiserver process to appear ...
I0524 21:05:46.844414 26754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0524 21:05:46.857083 26754 command_runner.go:130] > 1792
I0524 21:05:46.857327 26754 api_server.go:72] duration metric: took 11.452910897s to wait for apiserver process to appear ...
I0524 21:05:46.857338 26754 api_server.go:88] waiting for apiserver healthz status ...
I0524 21:05:46.857352 26754 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
I0524 21:05:46.862394 26754 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
ok
I0524 21:05:46.862440 26754 round_trippers.go:463] GET https://192.168.39.141:8443/version
I0524 21:05:46.862452 26754 round_trippers.go:469] Request Headers:
I0524 21:05:46.862463 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:46.862473 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:46.863449 26754 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0524 21:05:46.863463 26754 round_trippers.go:577] Response Headers:
I0524 21:05:46.863469 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:46.863484 26754 round_trippers.go:580] Content-Length: 263
I0524 21:05:46.863489 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:46 GMT
I0524 21:05:46.863496 26754 round_trippers.go:580] Audit-Id: c55654eb-acb1-4d72-97dd-1794cfc95f80
I0524 21:05:46.863507 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:46.863522 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:46.863531 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:46.863544 26754 request.go:1188] Response Body: {
"major": "1",
"minor": "27",
"gitVersion": "v1.27.2",
"gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
"gitTreeState": "clean",
"buildDate": "2023-05-17T14:13:28Z",
"goVersion": "go1.20.4",
"compiler": "gc",
"platform": "linux/amd64"
}
I0524 21:05:46.863608 26754 api_server.go:141] control plane version: v1.27.2
I0524 21:05:46.863625 26754 api_server.go:131] duration metric: took 6.281229ms to wait for apiserver health ...
I0524 21:05:46.863630 26754 system_pods.go:43] waiting for kube-system pods to appear ...
I0524 21:05:47.041060 26754 request.go:628] Waited for 177.349418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
I0524 21:05:47.041118 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
I0524 21:05:47.041124 26754 round_trippers.go:469] Request Headers:
I0524 21:05:47.041135 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:47.041145 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:47.044997 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:47.045018 26754 round_trippers.go:577] Response Headers:
I0524 21:05:47.045025 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:47.045030 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:47 GMT
I0524 21:05:47.045039 26754 round_trippers.go:580] Audit-Id: 72eff441-8c18-46ca-bcc2-565d2ba70890
I0524 21:05:47.045045 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:47.045050 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:47.045056 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:47.046736 26754 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"451","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
I0524 21:05:47.048353 26754 system_pods.go:59] 8 kube-system pods found
I0524 21:05:47.048376 26754 system_pods.go:61] "coredns-5d78c9869d-b58rt" [96aeb17f-d77a-4748-a3fb-a5f21e810413] Running
I0524 21:05:47.048381 26754 system_pods.go:61] "etcd-multinode-935345" [e8724e83-c511-481d-a9ca-8c0943c03817] Running
I0524 21:05:47.048385 26754 system_pods.go:61] "kindnet-lkcmf" [9ee8613c-4807-4a7d-8664-d2cd5f512863] Running
I0524 21:05:47.048391 26754 system_pods.go:61] "kube-apiserver-multinode-935345" [4e1d7b9b-2385-4595-80da-1cbc3e9804e6] Running
I0524 21:05:47.048398 26754 system_pods.go:61] "kube-controller-manager-multinode-935345" [58257693-9611-4d6d-a90d-c54c61f9bdb6] Running
I0524 21:05:47.048416 26754 system_pods.go:61] "kube-proxy-j5gdf" [5f24e81d-a75c-49a0-919b-2266a2a0fd94] Running
I0524 21:05:47.048428 26754 system_pods.go:61] "kube-scheduler-multinode-935345" [389454ac-5dbb-4456-a505-cf6a21fb81d4] Running
I0524 21:05:47.048434 26754 system_pods.go:61] "storage-provisioner" [0cf76689-90ee-4e10-80d0-67519768f5a1] Running
I0524 21:05:47.048440 26754 system_pods.go:74] duration metric: took 184.805742ms to wait for pod list to return data ...
I0524 21:05:47.048447 26754 default_sa.go:34] waiting for default service account to be created ...
I0524 21:05:47.240924 26754 request.go:628] Waited for 192.392893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
I0524 21:05:47.240989 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/default/serviceaccounts
I0524 21:05:47.240996 26754 round_trippers.go:469] Request Headers:
I0524 21:05:47.241007 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:47.241017 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:47.243649 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:47.243671 26754 round_trippers.go:577] Response Headers:
I0524 21:05:47.243678 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:47.243684 26754 round_trippers.go:580] Content-Length: 261
I0524 21:05:47.243689 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:47 GMT
I0524 21:05:47.243694 26754 round_trippers.go:580] Audit-Id: 89d59aa7-5b9f-4ff5-ae65-71156bff69b2
I0524 21:05:47.243699 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:47.243704 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:47.243709 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:47.243728 26754 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e169182d-933b-4d0d-bb79-bee9ffdc232f","resourceVersion":"369","creationTimestamp":"2023-05-24T21:05:34Z"}}]}
I0524 21:05:47.243936 26754 default_sa.go:45] found service account: "default"
I0524 21:05:47.243959 26754 default_sa.go:55] duration metric: took 195.503186ms for default service account to be created ...
I0524 21:05:47.243967 26754 system_pods.go:116] waiting for k8s-apps to be running ...
I0524 21:05:47.440382 26754 request.go:628] Waited for 196.337749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
I0524 21:05:47.440444 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
I0524 21:05:47.440449 26754 round_trippers.go:469] Request Headers:
I0524 21:05:47.440456 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:47.440463 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:47.444050 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:05:47.444067 26754 round_trippers.go:577] Response Headers:
I0524 21:05:47.444074 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:47 GMT
I0524 21:05:47.444080 26754 round_trippers.go:580] Audit-Id: b98f3be9-9b64-481b-af16-25f0a4107c48
I0524 21:05:47.444085 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:47.444097 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:47.444105 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:47.444112 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:47.445194 26754 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"451","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
I0524 21:05:47.446819 26754 system_pods.go:86] 8 kube-system pods found
I0524 21:05:47.446835 26754 system_pods.go:89] "coredns-5d78c9869d-b58rt" [96aeb17f-d77a-4748-a3fb-a5f21e810413] Running
I0524 21:05:47.446840 26754 system_pods.go:89] "etcd-multinode-935345" [e8724e83-c511-481d-a9ca-8c0943c03817] Running
I0524 21:05:47.446846 26754 system_pods.go:89] "kindnet-lkcmf" [9ee8613c-4807-4a7d-8664-d2cd5f512863] Running
I0524 21:05:47.446852 26754 system_pods.go:89] "kube-apiserver-multinode-935345" [4e1d7b9b-2385-4595-80da-1cbc3e9804e6] Running
I0524 21:05:47.446862 26754 system_pods.go:89] "kube-controller-manager-multinode-935345" [58257693-9611-4d6d-a90d-c54c61f9bdb6] Running
I0524 21:05:47.446869 26754 system_pods.go:89] "kube-proxy-j5gdf" [5f24e81d-a75c-49a0-919b-2266a2a0fd94] Running
I0524 21:05:47.446878 26754 system_pods.go:89] "kube-scheduler-multinode-935345" [389454ac-5dbb-4456-a505-cf6a21fb81d4] Running
I0524 21:05:47.446884 26754 system_pods.go:89] "storage-provisioner" [0cf76689-90ee-4e10-80d0-67519768f5a1] Running
I0524 21:05:47.446891 26754 system_pods.go:126] duration metric: took 202.91998ms to wait for k8s-apps to be running ...
I0524 21:05:47.446901 26754 system_svc.go:44] waiting for kubelet service to be running ....
I0524 21:05:47.446941 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0524 21:05:47.460315 26754 system_svc.go:56] duration metric: took 13.407243ms WaitForService to wait for kubelet.
I0524 21:05:47.460335 26754 kubeadm.go:581] duration metric: took 12.055919205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0524 21:05:47.460351 26754 node_conditions.go:102] verifying NodePressure condition ...
I0524 21:05:47.640798 26754 request.go:628] Waited for 180.356344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
I0524 21:05:47.640850 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
I0524 21:05:47.640854 26754 round_trippers.go:469] Request Headers:
I0524 21:05:47.640862 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:05:47.640868 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:05:47.643575 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:05:47.643594 26754 round_trippers.go:577] Response Headers:
I0524 21:05:47.643601 26754 round_trippers.go:580] Audit-Id: 6f9e4ff7-75bf-4452-9bbc-5cef478c63f9
I0524 21:05:47.643607 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:05:47.643612 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:05:47.643617 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:05:47.643632 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:05:47.643637 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:05:47 GMT
I0524 21:05:47.644823 26754 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"435","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
I0524 21:05:47.645128 26754 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0524 21:05:47.645149 26754 node_conditions.go:123] node cpu capacity is 2
I0524 21:05:47.645164 26754 node_conditions.go:105] duration metric: took 184.808668ms to run NodePressure ...
I0524 21:05:47.645177 26754 start.go:228] waiting for startup goroutines ...
I0524 21:05:47.645185 26754 start.go:233] waiting for cluster config update ...
I0524 21:05:47.645197 26754 start.go:242] writing updated cluster config ...
I0524 21:05:47.647969 26754 out.go:177]
I0524 21:05:47.649660 26754 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:05:47.649751 26754 profile.go:148] Saving config to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json ...
I0524 21:05:47.651807 26754 out.go:177] * Starting worker node multinode-935345-m02 in cluster multinode-935345
I0524 21:05:47.653228 26754 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:05:47.653249 26754 cache.go:57] Caching tarball of preloaded images
I0524 21:05:47.653343 26754 preload.go:174] Found /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0524 21:05:47.653354 26754 cache.go:60] Finished verifying existence of preloaded tar for v1.27.2 on docker
I0524 21:05:47.653419 26754 profile.go:148] Saving config to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json ...
I0524 21:05:47.653552 26754 cache.go:195] Successfully downloaded all kic artifacts
I0524 21:05:47.653572 26754 start.go:364] acquiring machines lock for multinode-935345-m02: {Name:mk4a40b66c29ad20ca421f9aaaf38de8f4a54848 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0524 21:05:47.653604 26754 start.go:368] acquired machines lock for "multinode-935345-m02" in 20.435µs
I0524 21:05:47.653618 26754 start.go:93] Provisioning new machine with config: &{Name:multinode-935345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684885407-16572@sha256:1678a360739dac48ad7fdd0fcdfd8f9af43ced0b54ec5cd320e5a35a4c50c733 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.27.2 ClusterName:multinode-935345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0524 21:05:47.653681 26754 start.go:125] createHost starting for "m02" (driver="kvm2")
I0524 21:05:47.655703 26754 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0524 21:05:47.655785 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:05:47.655810 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:05:47.669570 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44675
I0524 21:05:47.669935 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:05:47.670393 26754 main.go:141] libmachine: Using API Version 1
I0524 21:05:47.670413 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:05:47.670742 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:05:47.670924 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetMachineName
I0524 21:05:47.671066 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:05:47.671182 26754 start.go:159] libmachine.API.Create for "multinode-935345" (driver="kvm2")
I0524 21:05:47.671212 26754 client.go:168] LocalClient.Create starting
I0524 21:05:47.671242 26754 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem
I0524 21:05:47.671277 26754 main.go:141] libmachine: Decoding PEM data...
I0524 21:05:47.671293 26754 main.go:141] libmachine: Parsing certificate...
I0524 21:05:47.671343 26754 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem
I0524 21:05:47.671365 26754 main.go:141] libmachine: Decoding PEM data...
I0524 21:05:47.671378 26754 main.go:141] libmachine: Parsing certificate...
I0524 21:05:47.671396 26754 main.go:141] libmachine: Running pre-create checks...
I0524 21:05:47.671405 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .PreCreateCheck
I0524 21:05:47.671549 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetConfigRaw
I0524 21:05:47.671855 26754 main.go:141] libmachine: Creating machine...
I0524 21:05:47.671870 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .Create
I0524 21:05:47.672009 26754 main.go:141] libmachine: (multinode-935345-m02) Creating KVM machine...
I0524 21:05:47.673197 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found existing default KVM network
I0524 21:05:47.673325 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found existing private KVM network mk-multinode-935345
I0524 21:05:47.673457 26754 main.go:141] libmachine: (multinode-935345-m02) Setting up store path in /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02 ...
I0524 21:05:47.673481 26754 main.go:141] libmachine: (multinode-935345-m02) Building disk image from file:///home/jenkins/minikube-integration/16572-7844/.minikube/cache/iso/amd64/minikube-v1.30.1-1684885329-16572-amd64.iso
I0524 21:05:47.673535 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:47.673441 27152 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/16572-7844/.minikube
I0524 21:05:47.673643 26754 main.go:141] libmachine: (multinode-935345-m02) Downloading /home/jenkins/minikube-integration/16572-7844/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/16572-7844/.minikube/cache/iso/amd64/minikube-v1.30.1-1684885329-16572-amd64.iso...
I0524 21:05:47.876122 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:47.876009 27152 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/id_rsa...
I0524 21:05:47.950655 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:47.950555 27152 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/multinode-935345-m02.rawdisk...
I0524 21:05:47.950683 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Writing magic tar header
I0524 21:05:47.950728 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Writing SSH key tar header
I0524 21:05:47.950765 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:47.950702 27152 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02 ...
I0524 21:05:47.950838 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02
I0524 21:05:47.950868 26754 main.go:141] libmachine: (multinode-935345-m02) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02 (perms=drwx------)
I0524 21:05:47.950881 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844/.minikube/machines
I0524 21:05:47.950898 26754 main.go:141] libmachine: (multinode-935345-m02) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844/.minikube/machines (perms=drwxrwxr-x)
I0524 21:05:47.950913 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844/.minikube
I0524 21:05:47.950921 26754 main.go:141] libmachine: (multinode-935345-m02) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844/.minikube (perms=drwxr-xr-x)
I0524 21:05:47.950932 26754 main.go:141] libmachine: (multinode-935345-m02) Setting executable bit set on /home/jenkins/minikube-integration/16572-7844 (perms=drwxrwxr-x)
I0524 21:05:47.950939 26754 main.go:141] libmachine: (multinode-935345-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0524 21:05:47.950948 26754 main.go:141] libmachine: (multinode-935345-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0524 21:05:47.950956 26754 main.go:141] libmachine: (multinode-935345-m02) Creating domain...
I0524 21:05:47.950971 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/16572-7844
I0524 21:05:47.950987 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0524 21:05:47.950999 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Checking permissions on dir: /home/jenkins
I0524 21:05:47.951015 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Checking permissions on dir: /home
I0524 21:05:47.951036 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Skipping /home - not owner
I0524 21:05:47.951928 26754 main.go:141] libmachine: (multinode-935345-m02) define libvirt domain using xml:
I0524 21:05:47.951950 26754 main.go:141] libmachine: (multinode-935345-m02) <domain type='kvm'>
I0524 21:05:47.951976 26754 main.go:141] libmachine: (multinode-935345-m02) <name>multinode-935345-m02</name>
I0524 21:05:47.951997 26754 main.go:141] libmachine: (multinode-935345-m02) <memory unit='MiB'>2200</memory>
I0524 21:05:47.952009 26754 main.go:141] libmachine: (multinode-935345-m02) <vcpu>2</vcpu>
I0524 21:05:47.952020 26754 main.go:141] libmachine: (multinode-935345-m02) <features>
I0524 21:05:47.952029 26754 main.go:141] libmachine: (multinode-935345-m02) <acpi/>
I0524 21:05:47.952037 26754 main.go:141] libmachine: (multinode-935345-m02) <apic/>
I0524 21:05:47.952047 26754 main.go:141] libmachine: (multinode-935345-m02) <pae/>
I0524 21:05:47.952060 26754 main.go:141] libmachine: (multinode-935345-m02)
I0524 21:05:47.952092 26754 main.go:141] libmachine: (multinode-935345-m02) </features>
I0524 21:05:47.952118 26754 main.go:141] libmachine: (multinode-935345-m02) <cpu mode='host-passthrough'>
I0524 21:05:47.952130 26754 main.go:141] libmachine: (multinode-935345-m02)
I0524 21:05:47.952139 26754 main.go:141] libmachine: (multinode-935345-m02) </cpu>
I0524 21:05:47.952156 26754 main.go:141] libmachine: (multinode-935345-m02) <os>
I0524 21:05:47.952167 26754 main.go:141] libmachine: (multinode-935345-m02) <type>hvm</type>
I0524 21:05:47.952182 26754 main.go:141] libmachine: (multinode-935345-m02) <boot dev='cdrom'/>
I0524 21:05:47.952194 26754 main.go:141] libmachine: (multinode-935345-m02) <boot dev='hd'/>
I0524 21:05:47.952213 26754 main.go:141] libmachine: (multinode-935345-m02) <bootmenu enable='no'/>
I0524 21:05:47.952234 26754 main.go:141] libmachine: (multinode-935345-m02) </os>
I0524 21:05:47.952248 26754 main.go:141] libmachine: (multinode-935345-m02) <devices>
I0524 21:05:47.952259 26754 main.go:141] libmachine: (multinode-935345-m02) <disk type='file' device='cdrom'>
I0524 21:05:47.952280 26754 main.go:141] libmachine: (multinode-935345-m02) <source file='/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/boot2docker.iso'/>
I0524 21:05:47.952294 26754 main.go:141] libmachine: (multinode-935345-m02) <target dev='hdc' bus='scsi'/>
I0524 21:05:47.952308 26754 main.go:141] libmachine: (multinode-935345-m02) <readonly/>
I0524 21:05:47.952325 26754 main.go:141] libmachine: (multinode-935345-m02) </disk>
I0524 21:05:47.952340 26754 main.go:141] libmachine: (multinode-935345-m02) <disk type='file' device='disk'>
I0524 21:05:47.952356 26754 main.go:141] libmachine: (multinode-935345-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0524 21:05:47.952377 26754 main.go:141] libmachine: (multinode-935345-m02) <source file='/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/multinode-935345-m02.rawdisk'/>
I0524 21:05:47.952391 26754 main.go:141] libmachine: (multinode-935345-m02) <target dev='hda' bus='virtio'/>
I0524 21:05:47.952404 26754 main.go:141] libmachine: (multinode-935345-m02) </disk>
I0524 21:05:47.952424 26754 main.go:141] libmachine: (multinode-935345-m02) <interface type='network'>
I0524 21:05:47.952453 26754 main.go:141] libmachine: (multinode-935345-m02) <source network='mk-multinode-935345'/>
I0524 21:05:47.952467 26754 main.go:141] libmachine: (multinode-935345-m02) <model type='virtio'/>
I0524 21:05:47.952479 26754 main.go:141] libmachine: (multinode-935345-m02) </interface>
I0524 21:05:47.952492 26754 main.go:141] libmachine: (multinode-935345-m02) <interface type='network'>
I0524 21:05:47.952504 26754 main.go:141] libmachine: (multinode-935345-m02) <source network='default'/>
I0524 21:05:47.952518 26754 main.go:141] libmachine: (multinode-935345-m02) <model type='virtio'/>
I0524 21:05:47.952535 26754 main.go:141] libmachine: (multinode-935345-m02) </interface>
I0524 21:05:47.952548 26754 main.go:141] libmachine: (multinode-935345-m02) <serial type='pty'>
I0524 21:05:47.952560 26754 main.go:141] libmachine: (multinode-935345-m02) <target port='0'/>
I0524 21:05:47.952574 26754 main.go:141] libmachine: (multinode-935345-m02) </serial>
I0524 21:05:47.952588 26754 main.go:141] libmachine: (multinode-935345-m02) <console type='pty'>
I0524 21:05:47.952612 26754 main.go:141] libmachine: (multinode-935345-m02) <target type='serial' port='0'/>
I0524 21:05:47.952629 26754 main.go:141] libmachine: (multinode-935345-m02) </console>
I0524 21:05:47.952646 26754 main.go:141] libmachine: (multinode-935345-m02) <rng model='virtio'>
I0524 21:05:47.952659 26754 main.go:141] libmachine: (multinode-935345-m02) <backend model='random'>/dev/random</backend>
I0524 21:05:47.952675 26754 main.go:141] libmachine: (multinode-935345-m02) </rng>
I0524 21:05:47.952688 26754 main.go:141] libmachine: (multinode-935345-m02)
I0524 21:05:47.952707 26754 main.go:141] libmachine: (multinode-935345-m02)
I0524 21:05:47.952724 26754 main.go:141] libmachine: (multinode-935345-m02) </devices>
I0524 21:05:47.952738 26754 main.go:141] libmachine: (multinode-935345-m02) </domain>
I0524 21:05:47.952751 26754 main.go:141] libmachine: (multinode-935345-m02)
I0524 21:05:47.959411 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:8f:8d:6c in network default
I0524 21:05:47.959839 26754 main.go:141] libmachine: (multinode-935345-m02) Ensuring networks are active...
I0524 21:05:47.959863 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:47.960525 26754 main.go:141] libmachine: (multinode-935345-m02) Ensuring network default is active
I0524 21:05:47.960823 26754 main.go:141] libmachine: (multinode-935345-m02) Ensuring network mk-multinode-935345 is active
I0524 21:05:47.961226 26754 main.go:141] libmachine: (multinode-935345-m02) Getting domain xml...
I0524 21:05:47.961983 26754 main.go:141] libmachine: (multinode-935345-m02) Creating domain...
I0524 21:05:49.151701 26754 main.go:141] libmachine: (multinode-935345-m02) Waiting to get IP...
I0524 21:05:49.152717 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:49.153156 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:49.153183 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:49.153116 27152 retry.go:31] will retry after 271.615761ms: waiting for machine to come up
I0524 21:05:49.426586 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:49.426985 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:49.427020 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:49.426935 27152 retry.go:31] will retry after 345.4615ms: waiting for machine to come up
I0524 21:05:49.774500 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:49.774994 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:49.775028 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:49.774980 27152 retry.go:31] will retry after 354.893857ms: waiting for machine to come up
I0524 21:05:50.131467 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:50.131919 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:50.131943 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:50.131871 27152 retry.go:31] will retry after 460.542827ms: waiting for machine to come up
I0524 21:05:50.594393 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:50.594752 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:50.594782 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:50.594695 27152 retry.go:31] will retry after 720.968001ms: waiting for machine to come up
I0524 21:05:51.317485 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:51.318044 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:51.318072 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:51.317986 27152 retry.go:31] will retry after 704.301385ms: waiting for machine to come up
I0524 21:05:52.023783 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:52.024238 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:52.024273 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:52.024172 27152 retry.go:31] will retry after 773.731256ms: waiting for machine to come up
I0524 21:05:52.798976 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:52.799397 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:52.799440 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:52.799349 27152 retry.go:31] will retry after 1.286669433s: waiting for machine to come up
I0524 21:05:54.087680 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:54.088059 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:54.088088 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:54.088009 27152 retry.go:31] will retry after 1.72464763s: waiting for machine to come up
I0524 21:05:55.815000 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:55.815436 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:55.815466 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:55.815398 27152 retry.go:31] will retry after 1.636705285s: waiting for machine to come up
I0524 21:05:57.454214 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:57.454619 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:57.454645 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:57.454571 27152 retry.go:31] will retry after 2.32023824s: waiting for machine to come up
I0524 21:05:59.777492 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:05:59.778032 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:05:59.778061 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:05:59.778004 27152 retry.go:31] will retry after 2.25231859s: waiting for machine to come up
I0524 21:06:02.033640 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:02.033936 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:06:02.033960 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:06:02.033892 27152 retry.go:31] will retry after 3.371443021s: waiting for machine to come up
I0524 21:06:05.408586 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:05.408955 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find current IP address of domain multinode-935345-m02 in network mk-multinode-935345
I0524 21:06:05.408979 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | I0524 21:06:05.408906 27152 retry.go:31] will retry after 4.39210792s: waiting for machine to come up
I0524 21:06:09.803567 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:09.803999 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has current primary IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:09.804018 26754 main.go:141] libmachine: (multinode-935345-m02) Found IP for machine: 192.168.39.200
I0524 21:06:09.804031 26754 main.go:141] libmachine: (multinode-935345-m02) Reserving static IP address...
I0524 21:06:09.804371 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | unable to find host DHCP lease matching {name: "multinode-935345-m02", mac: "52:54:00:cd:96:28", ip: "192.168.39.200"} in network mk-multinode-935345
I0524 21:06:09.874459 26754 main.go:141] libmachine: (multinode-935345-m02) Reserved static IP address: 192.168.39.200
I0524 21:06:09.874493 26754 main.go:141] libmachine: (multinode-935345-m02) Waiting for SSH to be available...
I0524 21:06:09.874510 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Getting to WaitForSSH function...
I0524 21:06:09.877060 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:09.877474 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cd:96:28}
I0524 21:06:09.877512 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:09.877659 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Using SSH client type: external
I0524 21:06:09.877688 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/id_rsa (-rw-------)
I0524 21:06:09.877737 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0524 21:06:09.877765 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | About to run SSH command:
I0524 21:06:09.877790 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | exit 0
I0524 21:06:09.967210 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | SSH cmd err, output: <nil>:
I0524 21:06:09.967430 26754 main.go:141] libmachine: (multinode-935345-m02) KVM machine creation complete!
I0524 21:06:09.967789 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetConfigRaw
I0524 21:06:09.968311 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:09.968494 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:09.968695 26754 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0524 21:06:09.968714 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetState
I0524 21:06:09.970046 26754 main.go:141] libmachine: Detecting operating system of created instance...
I0524 21:06:09.970058 26754 main.go:141] libmachine: Waiting for SSH to be available...
I0524 21:06:09.970065 26754 main.go:141] libmachine: Getting to WaitForSSH function...
I0524 21:06:09.970071 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:09.972280 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:09.972599 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:09.972622 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:09.972743 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:09.972927 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:09.973094 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:09.973235 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:09.973384 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:09.974030 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:09.974053 26754 main.go:141] libmachine: About to run SSH command:
exit 0
I0524 21:06:10.085569 26754 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0524 21:06:10.085594 26754 main.go:141] libmachine: Detecting the provisioner...
I0524 21:06:10.085606 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:10.088269 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.088594 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.088623 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.088750 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:10.088924 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.089094 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.089198 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:10.089343 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:10.089731 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:10.089741 26754 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0524 21:06:10.207467 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g05a3382-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0524 21:06:10.207551 26754 main.go:141] libmachine: found compatible host: buildroot
I0524 21:06:10.207563 26754 main.go:141] libmachine: Provisioning with buildroot...
I0524 21:06:10.207572 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetMachineName
I0524 21:06:10.207871 26754 buildroot.go:166] provisioning hostname "multinode-935345-m02"
I0524 21:06:10.207900 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetMachineName
I0524 21:06:10.208103 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:10.211176 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.211563 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.211594 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.211714 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:10.211893 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.212037 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.212156 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:10.212339 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:10.212759 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:10.212775 26754 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-935345-m02 && echo "multinode-935345-m02" | sudo tee /etc/hostname
I0524 21:06:10.339144 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-935345-m02
I0524 21:06:10.339170 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:10.341723 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.342067 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.342101 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.342249 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:10.342431 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.342601 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.342720 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:10.342891 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:10.343342 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:10.343362 26754 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-935345-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-935345-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-935345-m02' | sudo tee -a /etc/hosts;
fi
fi
I0524 21:06:10.466570 26754 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0524 21:06:10.466599 26754 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16572-7844/.minikube CaCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16572-7844/.minikube}
I0524 21:06:10.466620 26754 buildroot.go:174] setting up certificates
I0524 21:06:10.466630 26754 provision.go:83] configureAuth start
I0524 21:06:10.466642 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetMachineName
I0524 21:06:10.466929 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetIP
I0524 21:06:10.469453 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.469776 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.469801 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.469940 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:10.472068 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.472425 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.472455 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.472605 26754 provision.go:138] copyHostCerts
I0524 21:06:10.472633 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem
I0524 21:06:10.472661 26754 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem, removing ...
I0524 21:06:10.472669 26754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem
I0524 21:06:10.472723 26754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/cert.pem (1123 bytes)
I0524 21:06:10.472806 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem
I0524 21:06:10.472823 26754 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem, removing ...
I0524 21:06:10.472827 26754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem
I0524 21:06:10.472851 26754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/key.pem (1679 bytes)
I0524 21:06:10.472891 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem
I0524 21:06:10.472906 26754 exec_runner.go:144] found /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem, removing ...
I0524 21:06:10.472911 26754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem
I0524 21:06:10.472930 26754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16572-7844/.minikube/ca.pem (1078 bytes)
I0524 21:06:10.472972 26754 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem org=jenkins.multinode-935345-m02 san=[192.168.39.200 192.168.39.200 localhost 127.0.0.1 minikube multinode-935345-m02]
I0524 21:06:10.714600 26754 provision.go:172] copyRemoteCerts
I0524 21:06:10.714649 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0524 21:06:10.714669 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:10.717019 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.717387 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.717414 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.717562 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:10.717763 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.717902 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:10.718082 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/id_rsa Username:docker}
I0524 21:06:10.803429 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0524 21:06:10.803506 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0524 21:06:10.825832 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem -> /etc/docker/server.pem
I0524 21:06:10.825885 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0524 21:06:10.847435 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0524 21:06:10.847484 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0524 21:06:10.869611 26754 provision.go:86] duration metric: configureAuth took 402.971507ms
I0524 21:06:10.869633 26754 buildroot.go:189] setting minikube options for container-runtime
I0524 21:06:10.869794 26754 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:06:10.869820 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:10.870096 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:10.872655 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.872964 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.872989 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.873094 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:10.873280 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.873440 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.873591 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:10.873750 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:10.874275 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:10.874290 26754 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0524 21:06:10.991924 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0524 21:06:10.991946 26754 buildroot.go:70] root file system type: tmpfs
I0524 21:06:10.992054 26754 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0524 21:06:10.992071 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:10.994563 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.994902 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:10.994931 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:10.995091 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:10.995293 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.995461 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:10.995593 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:10.995743 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:10.996129 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:10.996186 26754 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.141"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0524 21:06:11.122816 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.141
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0524 21:06:11.122860 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:11.125398 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:11.125769 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:11.125811 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:11.125938 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:11.126114 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:11.126271 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:11.126377 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:11.126516 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:11.126967 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:11.126985 26754 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0524 21:06:11.904293 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0524 21:06:11.904334 26754 main.go:141] libmachine: Checking connection to Docker...
I0524 21:06:11.904344 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetURL
I0524 21:06:11.905563 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | Using libvirt version 6000000
I0524 21:06:11.907824 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:11.908210 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:11.908247 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:11.908446 26754 main.go:141] libmachine: Docker is up and running!
I0524 21:06:11.908462 26754 main.go:141] libmachine: Reticulating splines...
I0524 21:06:11.908470 26754 client.go:171] LocalClient.Create took 24.237249057s
I0524 21:06:11.908497 26754 start.go:167] duration metric: libmachine.API.Create for "multinode-935345" took 24.237311201s
I0524 21:06:11.908516 26754 start.go:300] post-start starting for "multinode-935345-m02" (driver="kvm2")
I0524 21:06:11.908526 26754 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0524 21:06:11.908546 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:11.908746 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0524 21:06:11.908770 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:11.911028 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:11.911354 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:11.911381 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:11.911538 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:11.911683 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:11.911794 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:11.911888 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/id_rsa Username:docker}
I0524 21:06:11.999970 26754 ssh_runner.go:195] Run: cat /etc/os-release
I0524 21:06:12.004057 26754 command_runner.go:130] > NAME=Buildroot
I0524 21:06:12.004075 26754 command_runner.go:130] > VERSION=2021.02.12-1-g05a3382-dirty
I0524 21:06:12.004079 26754 command_runner.go:130] > ID=buildroot
I0524 21:06:12.004084 26754 command_runner.go:130] > VERSION_ID=2021.02.12
I0524 21:06:12.004088 26754 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0524 21:06:12.004372 26754 info.go:137] Remote host: Buildroot 2021.02.12
I0524 21:06:12.004387 26754 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/addons for local assets ...
I0524 21:06:12.004454 26754 filesync.go:126] Scanning /home/jenkins/minikube-integration/16572-7844/.minikube/files for local assets ...
I0524 21:06:12.004542 26754 filesync.go:149] local asset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> 150552.pem in /etc/ssl/certs
I0524 21:06:12.004554 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> /etc/ssl/certs/150552.pem
I0524 21:06:12.004657 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0524 21:06:12.013291 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem --> /etc/ssl/certs/150552.pem (1708 bytes)
I0524 21:06:12.037673 26754 start.go:303] post-start completed in 129.141695ms
I0524 21:06:12.037725 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetConfigRaw
I0524 21:06:12.038306 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetIP
I0524 21:06:12.041973 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.042422 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:12.042450 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.042726 26754 profile.go:148] Saving config to /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/config.json ...
I0524 21:06:12.043033 26754 start.go:128] duration metric: createHost completed in 24.389341874s
I0524 21:06:12.043060 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:12.045333 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.045643 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:12.045665 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.045802 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:12.045986 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:12.046124 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:12.046245 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:12.046378 26754 main.go:141] libmachine: Using SSH client type: native
I0524 21:06:12.046962 26754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil> [] 0s} 192.168.39.200 22 <nil> <nil>}
I0524 21:06:12.046979 26754 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0524 21:06:12.163232 26754 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684962372.134521159
I0524 21:06:12.163254 26754 fix.go:207] guest clock: 1684962372.134521159
I0524 21:06:12.163264 26754 fix.go:220] Guest: 2023-05-24 21:06:12.134521159 +0000 UTC Remote: 2023-05-24 21:06:12.043048308 +0000 UTC m=+98.620518311 (delta=91.472851ms)
I0524 21:06:12.163281 26754 fix.go:191] guest clock delta is within tolerance: 91.472851ms
I0524 21:06:12.163287 26754 start.go:83] releasing machines lock for "multinode-935345-m02", held for 24.509674837s
I0524 21:06:12.163307 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:12.163684 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetIP
I0524 21:06:12.166170 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.166496 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:12.166525 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.169191 26754 out.go:177] * Found network options:
I0524 21:06:12.170720 26754 out.go:177] - NO_PROXY=192.168.39.141
W0524 21:06:12.172267 26754 proxy.go:119] fail to check proxy env: Error ip not in block
I0524 21:06:12.172290 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:12.172811 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:12.173012 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .DriverName
I0524 21:06:12.173082 26754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0524 21:06:12.173119 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
W0524 21:06:12.173201 26754 proxy.go:119] fail to check proxy env: Error ip not in block
I0524 21:06:12.173280 26754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0524 21:06:12.173302 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHHostname
I0524 21:06:12.175951 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.176208 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.176404 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:12.176434 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.176583 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:12.176707 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:12.176734 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:12.176749 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:12.176866 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHPort
I0524 21:06:12.176941 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:12.177020 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHKeyPath
I0524 21:06:12.177081 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/id_rsa Username:docker}
I0524 21:06:12.177119 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetSSHUsername
I0524 21:06:12.177227 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345-m02/id_rsa Username:docker}
I0524 21:06:12.261240 26754 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0524 21:06:12.261299 26754 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0524 21:06:12.261368 26754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0524 21:06:12.286029 26754 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0524 21:06:12.286976 26754 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0524 21:06:12.287009 26754 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0524 21:06:12.287017 26754 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
I0524 21:06:12.287100 26754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0524 21:06:12.306404 26754 docker.go:633] Got preloaded images:
I0524 21:06:12.306429 26754 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
I0524 21:06:12.306475 26754 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0524 21:06:12.317250 26754 command_runner.go:139] > {"Repositories":{}}
I0524 21:06:12.317630 26754 ssh_runner.go:195] Run: which lz4
I0524 21:06:12.321481 26754 command_runner.go:130] > /usr/bin/lz4
I0524 21:06:12.321814 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0524 21:06:12.321904 26754 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0524 21:06:12.326026 26754 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0524 21:06:12.326302 26754 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0524 21:06:12.326336 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412256110 bytes)
I0524 21:06:13.981972 26754 docker.go:597] Took 1.660093 seconds to copy over tarball
I0524 21:06:13.982027 26754 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0524 21:06:16.394698 26754 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.412652699s)
I0524 21:06:16.394720 26754 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0524 21:06:16.430950 26754 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0524 21:06:16.439884 26754 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.7-0":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.27.2":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.27.2":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.27.2":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.27.2":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
I0524 21:06:16.440059 26754 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
I0524 21:06:16.455857 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:06:16.553740 26754 ssh_runner.go:195] Run: sudo systemctl restart docker
I0524 21:06:20.041824 26754 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.488050855s)
I0524 21:06:20.041864 26754 start.go:481] detecting cgroup driver to use...
I0524 21:06:20.041955 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:06:20.059042 26754 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0524 21:06:20.059333 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0524 21:06:20.068571 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0524 21:06:20.077535 26754 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0524 21:06:20.077580 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0524 21:06:20.086852 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:06:20.095765 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0524 21:06:20.105113 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0524 21:06:20.114540 26754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0524 21:06:20.123945 26754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0524 21:06:20.133098 26754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0524 21:06:20.141433 26754 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0524 21:06:20.141515 26754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0524 21:06:20.149645 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:06:20.248744 26754 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0524 21:06:20.266435 26754 start.go:481] detecting cgroup driver to use...
I0524 21:06:20.266521 26754 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0524 21:06:20.286764 26754 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0524 21:06:20.286786 26754 command_runner.go:130] > [Unit]
I0524 21:06:20.286796 26754 command_runner.go:130] > Description=Docker Application Container Engine
I0524 21:06:20.286813 26754 command_runner.go:130] > Documentation=https://docs.docker.com
I0524 21:06:20.286822 26754 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0524 21:06:20.286834 26754 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0524 21:06:20.286850 26754 command_runner.go:130] > StartLimitBurst=3
I0524 21:06:20.286857 26754 command_runner.go:130] > StartLimitIntervalSec=60
I0524 21:06:20.286863 26754 command_runner.go:130] > [Service]
I0524 21:06:20.286869 26754 command_runner.go:130] > Type=notify
I0524 21:06:20.286875 26754 command_runner.go:130] > Restart=on-failure
I0524 21:06:20.286882 26754 command_runner.go:130] > Environment=NO_PROXY=192.168.39.141
I0524 21:06:20.286894 26754 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0524 21:06:20.286911 26754 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0524 21:06:20.286925 26754 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0524 21:06:20.286940 26754 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0524 21:06:20.286955 26754 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0524 21:06:20.286970 26754 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0524 21:06:20.286981 26754 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0524 21:06:20.286998 26754 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0524 21:06:20.287012 26754 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0524 21:06:20.287020 26754 command_runner.go:130] > ExecStart=
I0524 21:06:20.287048 26754 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0524 21:06:20.287063 26754 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0524 21:06:20.287075 26754 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0524 21:06:20.287089 26754 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0524 21:06:20.287098 26754 command_runner.go:130] > LimitNOFILE=infinity
I0524 21:06:20.287107 26754 command_runner.go:130] > LimitNPROC=infinity
I0524 21:06:20.287117 26754 command_runner.go:130] > LimitCORE=infinity
I0524 21:06:20.287127 26754 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0524 21:06:20.287137 26754 command_runner.go:130] > # Only systemd 226 and above support this version.
I0524 21:06:20.287148 26754 command_runner.go:130] > TasksMax=infinity
I0524 21:06:20.287157 26754 command_runner.go:130] > TimeoutStartSec=0
I0524 21:06:20.287171 26754 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0524 21:06:20.287181 26754 command_runner.go:130] > Delegate=yes
I0524 21:06:20.287190 26754 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0524 21:06:20.287213 26754 command_runner.go:130] > KillMode=process
I0524 21:06:20.287223 26754 command_runner.go:130] > [Install]
I0524 21:06:20.287231 26754 command_runner.go:130] > WantedBy=multi-user.target
I0524 21:06:20.287303 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:06:20.303977 26754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0524 21:06:20.321088 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0524 21:06:20.333064 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:06:20.345101 26754 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0524 21:06:20.375609 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0524 21:06:20.388219 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0524 21:06:20.405185 26754 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0524 21:06:20.405252 26754 ssh_runner.go:195] Run: which cri-dockerd
I0524 21:06:20.408581 26754 command_runner.go:130] > /usr/bin/cri-dockerd
I0524 21:06:20.408750 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0524 21:06:20.416566 26754 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0524 21:06:20.431709 26754 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0524 21:06:20.528586 26754 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0524 21:06:20.631261 26754 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
I0524 21:06:20.631296 26754 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0524 21:06:20.648085 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:06:20.750245 26754 ssh_runner.go:195] Run: sudo systemctl restart docker
I0524 21:06:22.143524 26754 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.393216807s)
I0524 21:06:22.143608 26754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:06:22.255048 26754 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0524 21:06:22.370191 26754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0524 21:06:22.481413 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:06:22.590657 26754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0524 21:06:22.607279 26754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0524 21:06:22.717356 26754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0524 21:06:22.799200 26754 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0524 21:06:22.799268 26754 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0524 21:06:22.805080 26754 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0524 21:06:22.805101 26754 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0524 21:06:22.805109 26754 command_runner.go:130] > Device: 16h/22d Inode: 968 Links: 1
I0524 21:06:22.805119 26754 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0524 21:06:22.805127 26754 command_runner.go:130] > Access: 2023-05-24 21:06:22.713979535 +0000
I0524 21:06:22.805135 26754 command_runner.go:130] > Modify: 2023-05-24 21:06:22.713979535 +0000
I0524 21:06:22.805143 26754 command_runner.go:130] > Change: 2023-05-24 21:06:22.716982246 +0000
I0524 21:06:22.805149 26754 command_runner.go:130] > Birth: -
I0524 21:06:22.805642 26754 start.go:549] Will wait 60s for crictl version
I0524 21:06:22.805691 26754 ssh_runner.go:195] Run: which crictl
I0524 21:06:22.809786 26754 command_runner.go:130] > /usr/bin/crictl
I0524 21:06:22.809834 26754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0524 21:06:22.846722 26754 command_runner.go:130] > Version: 0.1.0
I0524 21:06:22.846902 26754 command_runner.go:130] > RuntimeName: docker
I0524 21:06:22.847002 26754 command_runner.go:130] > RuntimeVersion: 24.0.1
I0524 21:06:22.847168 26754 command_runner.go:130] > RuntimeApiVersion: v1alpha2
I0524 21:06:22.849010 26754 start.go:565] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.1
RuntimeApiVersion: v1alpha2
I0524 21:06:22.849065 26754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0524 21:06:22.875905 26754 command_runner.go:130] > 24.0.1
I0524 21:06:22.876050 26754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0524 21:06:22.900662 26754 command_runner.go:130] > 24.0.1
I0524 21:06:22.904080 26754 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.1 ...
I0524 21:06:22.905719 26754 out.go:177] - env NO_PROXY=192.168.39.141
I0524 21:06:22.907362 26754 main.go:141] libmachine: (multinode-935345-m02) Calling .GetIP
I0524 21:06:22.910217 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:22.910570 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:96:28", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:06:02 +0000 UTC Type:0 Mac:52:54:00:cd:96:28 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:multinode-935345-m02 Clientid:01:52:54:00:cd:96:28}
I0524 21:06:22.910606 26754 main.go:141] libmachine: (multinode-935345-m02) DBG | domain multinode-935345-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:cd:96:28 in network mk-multinode-935345
I0524 21:06:22.910784 26754 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0524 21:06:22.914892 26754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0524 21:06:22.927674 26754 certs.go:56] Setting up /home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345 for IP: 192.168.39.200
I0524 21:06:22.927703 26754 certs.go:190] acquiring lock for shared ca certs: {Name:mkd255b1ad7adc894443e9a2618d4730aa631e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0524 21:06:22.927855 26754 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.key
I0524 21:06:22.927905 26754 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.key
I0524 21:06:22.927928 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0524 21:06:22.927947 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0524 21:06:22.927964 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0524 21:06:22.927980 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0524 21:06:22.928062 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055.pem (1338 bytes)
W0524 21:06:22.928102 26754 certs.go:433] ignoring /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055_empty.pem, impossibly tiny 0 bytes
I0524 21:06:22.928124 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca-key.pem (1679 bytes)
I0524 21:06:22.928157 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/ca.pem (1078 bytes)
I0524 21:06:22.928189 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/cert.pem (1123 bytes)
I0524 21:06:22.928219 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/home/jenkins/minikube-integration/16572-7844/.minikube/certs/key.pem (1679 bytes)
I0524 21:06:22.928278 26754 certs.go:437] found cert: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem (1708 bytes)
I0524 21:06:22.928305 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem -> /usr/share/ca-certificates/150552.pem
I0524 21:06:22.928317 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0524 21:06:22.928332 26754 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055.pem -> /usr/share/ca-certificates/15055.pem
I0524 21:06:22.928675 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0524 21:06:22.951513 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0524 21:06:22.973872 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0524 21:06:22.996184 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0524 21:06:23.018866 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/files/etc/ssl/certs/150552.pem --> /usr/share/ca-certificates/150552.pem (1708 bytes)
I0524 21:06:23.040958 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0524 21:06:23.063593 26754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16572-7844/.minikube/certs/15055.pem --> /usr/share/ca-certificates/15055.pem (1338 bytes)
I0524 21:06:23.086193 26754 ssh_runner.go:195] Run: openssl version
I0524 21:06:23.091493 26754 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0524 21:06:23.091554 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15055.pem && ln -fs /usr/share/ca-certificates/15055.pem /etc/ssl/certs/15055.pem"
I0524 21:06:23.100562 26754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15055.pem
I0524 21:06:23.104922 26754 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 24 20:52 /usr/share/ca-certificates/15055.pem
I0524 21:06:23.105127 26754 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 20:52 /usr/share/ca-certificates/15055.pem
I0524 21:06:23.105169 26754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15055.pem
I0524 21:06:23.110393 26754 command_runner.go:130] > 51391683
I0524 21:06:23.110607 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15055.pem /etc/ssl/certs/51391683.0"
I0524 21:06:23.120238 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150552.pem && ln -fs /usr/share/ca-certificates/150552.pem /etc/ssl/certs/150552.pem"
I0524 21:06:23.129620 26754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150552.pem
I0524 21:06:23.133786 26754 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 24 20:52 /usr/share/ca-certificates/150552.pem
I0524 21:06:23.134040 26754 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 20:52 /usr/share/ca-certificates/150552.pem
I0524 21:06:23.134078 26754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150552.pem
I0524 21:06:23.139329 26754 command_runner.go:130] > 3ec20f2e
I0524 21:06:23.139384 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150552.pem /etc/ssl/certs/3ec20f2e.0"
I0524 21:06:23.148978 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0524 21:06:23.158290 26754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0524 21:06:23.162748 26754 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 24 20:47 /usr/share/ca-certificates/minikubeCA.pem
I0524 21:06:23.162768 26754 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 20:47 /usr/share/ca-certificates/minikubeCA.pem
I0524 21:06:23.162801 26754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0524 21:06:23.168579 26754 command_runner.go:130] > b5213941
I0524 21:06:23.168626 26754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0524 21:06:23.178222 26754 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0524 21:06:23.182340 26754 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0524 21:06:23.182375 26754 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0524 21:06:23.182429 26754 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0524 21:06:23.208313 26754 command_runner.go:130] > cgroupfs
I0524 21:06:23.209115 26754 cni.go:84] Creating CNI manager for ""
I0524 21:06:23.209134 26754 cni.go:136] 2 nodes found, recommending kindnet
I0524 21:06:23.209146 26754 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0524 21:06:23.209163 26754 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-935345 NodeName:multinode-935345-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0524 21:06:23.209269 26754 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.200
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-935345-m02"
kubeletExtraArgs:
node-ip: 192.168.39.200
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.27.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0524 21:06:23.209314 26754 kubeadm.go:971] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-935345-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
[Install]
config:
{KubernetesVersion:v1.27.2 ClusterName:multinode-935345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0524 21:06:23.209369 26754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
I0524 21:06:23.218113 26754 command_runner.go:130] > kubeadm
I0524 21:06:23.218127 26754 command_runner.go:130] > kubectl
I0524 21:06:23.218135 26754 command_runner.go:130] > kubelet
I0524 21:06:23.218153 26754 binaries.go:44] Found k8s binaries, skipping transfer
I0524 21:06:23.218197 26754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0524 21:06:23.226648 26754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
I0524 21:06:23.243353 26754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0524 21:06:23.259347 26754 ssh_runner.go:195] Run: grep 192.168.39.141 control-plane.minikube.internal$ /etc/hosts
I0524 21:06:23.263478 26754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.141 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0524 21:06:23.276347 26754 host.go:66] Checking if "multinode-935345" exists ...
I0524 21:06:23.276620 26754 config.go:182] Loaded profile config "multinode-935345": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 21:06:23.276763 26754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0524 21:06:23.276810 26754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 21:06:23.290656 26754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40539
I0524 21:06:23.291067 26754 main.go:141] libmachine: () Calling .GetVersion
I0524 21:06:23.291543 26754 main.go:141] libmachine: Using API Version 1
I0524 21:06:23.291560 26754 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 21:06:23.291842 26754 main.go:141] libmachine: () Calling .GetMachineName
I0524 21:06:23.292045 26754 main.go:141] libmachine: (multinode-935345) Calling .DriverName
I0524 21:06:23.292211 26754 start.go:301] JoinCluster: &{Name:multinode-935345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16572/minikube-v1.30.1-1684885329-16572-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684885407-16572@sha256:1678a360739dac48ad7fdd0fcdfd8f9af43ced0b54ec5cd320e5a35a4c50c733 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.2 ClusterName:multinode-935345 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0524 21:06:23.292301 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
I0524 21:06:23.292320 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHHostname
I0524 21:06:23.294966 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:06:23.295308 26754 main.go:141] libmachine: (multinode-935345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:95:e5", ip: ""} in network mk-multinode-935345: {Iface:virbr1 ExpiryTime:2023-05-24 22:04:48 +0000 UTC Type:0 Mac:52:54:00:5f:95:e5 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-935345 Clientid:01:52:54:00:5f:95:e5}
I0524 21:06:23.295336 26754 main.go:141] libmachine: (multinode-935345) DBG | domain multinode-935345 has defined IP address 192.168.39.141 and MAC address 52:54:00:5f:95:e5 in network mk-multinode-935345
I0524 21:06:23.295406 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHPort
I0524 21:06:23.295564 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHKeyPath
I0524 21:06:23.295693 26754 main.go:141] libmachine: (multinode-935345) Calling .GetSSHUsername
I0524 21:06:23.295814 26754 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16572-7844/.minikube/machines/multinode-935345/id_rsa Username:docker}
I0524 21:06:23.473375 26754 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qiz1xo.mwoapqiiuy0d2yod --discovery-token-ca-cert-hash sha256:cb9ff1f5bf5bbc94cbf036a3e2c087ff0ad7b74e6aafd0cc8516058c6c6a695c
I0524 21:06:23.473420 26754 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.200 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0524 21:06:23.473448 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qiz1xo.mwoapqiiuy0d2yod --discovery-token-ca-cert-hash sha256:cb9ff1f5bf5bbc94cbf036a3e2c087ff0ad7b74e6aafd0cc8516058c6c6a695c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-935345-m02"
I0524 21:06:23.561577 26754 command_runner.go:130] > [preflight] Running pre-flight checks
I0524 21:06:23.787641 26754 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0524 21:06:23.787671 26754 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0524 21:06:23.822431 26754 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0524 21:06:23.822462 26754 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0524 21:06:23.822471 26754 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0524 21:06:23.931910 26754 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0524 21:06:25.465142 26754 command_runner.go:130] > This node has joined the cluster:
I0524 21:06:25.465172 26754 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0524 21:06:25.465182 26754 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0524 21:06:25.465192 26754 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0524 21:06:25.467098 26754 command_runner.go:130] ! W0524 21:06:23.546209 1319 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0524 21:06:25.467123 26754 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0524 21:06:25.467145 26754 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qiz1xo.mwoapqiiuy0d2yod --discovery-token-ca-cert-hash sha256:cb9ff1f5bf5bbc94cbf036a3e2c087ff0ad7b74e6aafd0cc8516058c6c6a695c --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-935345-m02": (1.993681037s)
I0524 21:06:25.467164 26754 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0524 21:06:25.736353 26754 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
I0524 21:06:25.736409 26754 start.go:303] JoinCluster complete in 2.444193314s
I0524 21:06:25.736422 26754 cni.go:84] Creating CNI manager for ""
I0524 21:06:25.736428 26754 cni.go:136] 2 nodes found, recommending kindnet
I0524 21:06:25.736481 26754 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0524 21:06:25.742108 26754 command_runner.go:130] > File: /opt/cni/bin/portmap
I0524 21:06:25.742133 26754 command_runner.go:130] > Size: 2798344 Blocks: 5472 IO Block: 4096 regular file
I0524 21:06:25.742144 26754 command_runner.go:130] > Device: 11h/17d Inode: 3542 Links: 1
I0524 21:06:25.742154 26754 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0524 21:06:25.742162 26754 command_runner.go:130] > Access: 2023-05-24 21:04:46.041271815 +0000
I0524 21:06:25.742173 26754 command_runner.go:130] > Modify: 2023-05-24 03:44:28.000000000 +0000
I0524 21:06:25.742180 26754 command_runner.go:130] > Change: 2023-05-24 21:04:44.319271815 +0000
I0524 21:06:25.742189 26754 command_runner.go:130] > Birth: -
I0524 21:06:25.742299 26754 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
I0524 21:06:25.742315 26754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0524 21:06:25.772717 26754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0524 21:06:26.193374 26754 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0524 21:06:26.198480 26754 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0524 21:06:26.202955 26754 command_runner.go:130] > serviceaccount/kindnet unchanged
I0524 21:06:26.217529 26754 command_runner.go:130] > daemonset.apps/kindnet configured
I0524 21:06:26.221046 26754 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/16572-7844/kubeconfig
I0524 21:06:26.221246 26754 kapi.go:59] client config for multinode-935345: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt", KeyFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key", CAFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0524 21:06:26.221526 26754 round_trippers.go:463] GET https://192.168.39.141:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0524 21:06:26.221539 26754 round_trippers.go:469] Request Headers:
I0524 21:06:26.221546 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:26.221552 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:26.224244 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:26.224267 26754 round_trippers.go:577] Response Headers:
I0524 21:06:26.224278 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:26.224286 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:26.224295 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:26.224304 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:26.224319 26754 round_trippers.go:580] Content-Length: 291
I0524 21:06:26.224332 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:26 GMT
I0524 21:06:26.224340 26754 round_trippers.go:580] Audit-Id: 3d7a84ea-b91b-409a-a1b6-0ece5b10b8c6
I0524 21:06:26.224360 26754 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ae8d78ae-9ffd-4b14-9e3b-aca097f80b28","resourceVersion":"455","creationTimestamp":"2023-05-24T21:05:21Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0524 21:06:26.224451 26754 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-935345" context rescaled to 1 replicas
I0524 21:06:26.224484 26754 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.200 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0524 21:06:26.227334 26754 out.go:177] * Verifying Kubernetes components...
I0524 21:06:26.228523 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0524 21:06:26.248506 26754 loader.go:373] Config loaded from file: /home/jenkins/minikube-integration/16572-7844/kubeconfig
I0524 21:06:26.248815 26754 kapi.go:59] client config for multinode-935345: &rest.Config{Host:"https://192.168.39.141:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.crt", KeyFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/profiles/multinode-935345/client.key", CAFile:"/home/jenkins/minikube-integration/16572-7844/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0524 21:06:26.249147 26754 node_ready.go:35] waiting up to 6m0s for node "multinode-935345-m02" to be "Ready" ...
I0524 21:06:26.249236 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:26.249247 26754 round_trippers.go:469] Request Headers:
I0524 21:06:26.249257 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:26.249264 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:26.252202 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:26.252227 26754 round_trippers.go:577] Response Headers:
I0524 21:06:26.252234 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:26.252241 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:26.252246 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:26.252252 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:26.252260 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:26 GMT
I0524 21:06:26.252268 26754 round_trippers.go:580] Audit-Id: 936ffec4-501b-4f0c-baa6-2edbcce35e2f
I0524 21:06:26.252386 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"515","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3765 chars]
I0524 21:06:26.753511 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:26.753546 26754 round_trippers.go:469] Request Headers:
I0524 21:06:26.753557 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:26.753567 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:26.756120 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:26.756145 26754 round_trippers.go:577] Response Headers:
I0524 21:06:26.756154 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:26.756162 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:26.756170 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:26.756178 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:26.756187 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:26 GMT
I0524 21:06:26.756197 26754 round_trippers.go:580] Audit-Id: a3fa947b-ce25-4188-9f70-d7b8c0d026ba
I0524 21:06:26.756717 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"515","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3765 chars]
I0524 21:06:27.253358 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:27.253388 26754 round_trippers.go:469] Request Headers:
I0524 21:06:27.253399 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:27.253407 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:27.256054 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:27.256079 26754 round_trippers.go:577] Response Headers:
I0524 21:06:27.256090 26754 round_trippers.go:580] Audit-Id: 45f8e8b5-be5b-4ad3-8e84-7d72b674574a
I0524 21:06:27.256099 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:27.256107 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:27.256115 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:27.256123 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:27.256131 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:27 GMT
I0524 21:06:27.256297 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"515","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3765 chars]
I0524 21:06:27.753401 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:27.753421 26754 round_trippers.go:469] Request Headers:
I0524 21:06:27.753442 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:27.753448 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:27.755846 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:27.755859 26754 round_trippers.go:577] Response Headers:
I0524 21:06:27.755864 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:27.755870 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:27 GMT
I0524 21:06:27.755875 26754 round_trippers.go:580] Audit-Id: 730cd0ee-4a8d-4ede-88e9-072d58c10f77
I0524 21:06:27.755882 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:27.755888 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:27.755893 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:27.756172 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"515","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3765 chars]
I0524 21:06:28.253887 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:28.253910 26754 round_trippers.go:469] Request Headers:
I0524 21:06:28.253922 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:28.253934 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:28.257012 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:28.257034 26754 round_trippers.go:577] Response Headers:
I0524 21:06:28.257041 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:28.257053 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:28.257058 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:28.257064 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:28.257073 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:28 GMT
I0524 21:06:28.257081 26754 round_trippers.go:580] Audit-Id: 3de646b7-eb92-4539-b9a3-5207154b7d2f
I0524 21:06:28.257306 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"515","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3765 chars]
I0524 21:06:28.257629 26754 node_ready.go:58] node "multinode-935345-m02" has status "Ready":"False"
I0524 21:06:28.752937 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:28.752956 26754 round_trippers.go:469] Request Headers:
I0524 21:06:28.752964 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:28.752972 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:28.755732 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:28.755750 26754 round_trippers.go:577] Response Headers:
I0524 21:06:28.755757 26754 round_trippers.go:580] Audit-Id: b4429aff-520d-4930-b8f8-e318d20e696d
I0524 21:06:28.755762 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:28.755770 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:28.755779 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:28.755791 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:28.755799 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:28 GMT
I0524 21:06:28.756005 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"515","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3765 chars]
I0524 21:06:29.253736 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:29.253758 26754 round_trippers.go:469] Request Headers:
I0524 21:06:29.253766 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:29.253773 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:29.257057 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:29.257080 26754 round_trippers.go:577] Response Headers:
I0524 21:06:29.257091 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:29.257099 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:29.257105 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:29 GMT
I0524 21:06:29.257111 26754 round_trippers.go:580] Audit-Id: 1a234e64-ff22-47cd-ab97-b8aea2ece946
I0524 21:06:29.257117 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:29.257125 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:29.257389 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:29.753027 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:29.753050 26754 round_trippers.go:469] Request Headers:
I0524 21:06:29.753059 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:29.753066 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:29.756095 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:29.756113 26754 round_trippers.go:577] Response Headers:
I0524 21:06:29.756119 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:29.756125 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:29.756131 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:29.756137 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:29 GMT
I0524 21:06:29.756142 26754 round_trippers.go:580] Audit-Id: fe203598-7115-4563-bd64-93da56f4a806
I0524 21:06:29.756148 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:29.756288 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:30.252918 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:30.252938 26754 round_trippers.go:469] Request Headers:
I0524 21:06:30.252946 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:30.252953 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:30.256823 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:30.256843 26754 round_trippers.go:577] Response Headers:
I0524 21:06:30.256853 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:30.256863 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:30.256875 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:30.256882 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:30.256888 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:30 GMT
I0524 21:06:30.256894 26754 round_trippers.go:580] Audit-Id: 1fd0ea13-084a-4519-8aee-79af3aff9dec
I0524 21:06:30.257168 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:30.753864 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:30.753883 26754 round_trippers.go:469] Request Headers:
I0524 21:06:30.753891 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:30.753897 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:30.756579 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:30.756601 26754 round_trippers.go:577] Response Headers:
I0524 21:06:30.756610 26754 round_trippers.go:580] Audit-Id: ca246e79-0e67-4a79-b57a-af74f9342d6e
I0524 21:06:30.756617 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:30.756628 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:30.756639 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:30.756647 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:30.756658 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:30 GMT
I0524 21:06:30.756802 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:30.757083 26754 node_ready.go:58] node "multinode-935345-m02" has status "Ready":"False"
I0524 21:06:31.253442 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:31.253477 26754 round_trippers.go:469] Request Headers:
I0524 21:06:31.253490 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:31.253501 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:31.256643 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:31.256670 26754 round_trippers.go:577] Response Headers:
I0524 21:06:31.256680 26754 round_trippers.go:580] Audit-Id: 0d0b62d1-e365-4fc7-9493-69072fd56538
I0524 21:06:31.256689 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:31.256699 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:31.256708 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:31.256729 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:31.256738 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:31 GMT
I0524 21:06:31.257348 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:31.752982 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:31.753003 26754 round_trippers.go:469] Request Headers:
I0524 21:06:31.753011 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:31.753022 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:31.756110 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:31.756133 26754 round_trippers.go:577] Response Headers:
I0524 21:06:31.756143 26754 round_trippers.go:580] Audit-Id: af63a997-c181-4dc9-a084-52e26c0f3bca
I0524 21:06:31.756151 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:31.756162 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:31.756174 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:31.756185 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:31.756197 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:31 GMT
I0524 21:06:31.756718 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:32.253360 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:32.253385 26754 round_trippers.go:469] Request Headers:
I0524 21:06:32.253396 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:32.253404 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:32.256002 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:32.256020 26754 round_trippers.go:577] Response Headers:
I0524 21:06:32.256030 26754 round_trippers.go:580] Audit-Id: 4fcd91e0-8c2f-417d-9d53-f1df1cd4f645
I0524 21:06:32.256036 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:32.256044 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:32.256052 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:32.256069 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:32.256078 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:32 GMT
I0524 21:06:32.256298 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:32.753288 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:32.753310 26754 round_trippers.go:469] Request Headers:
I0524 21:06:32.753318 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:32.753325 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:32.756552 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:32.756572 26754 round_trippers.go:577] Response Headers:
I0524 21:06:32.756582 26754 round_trippers.go:580] Audit-Id: f1a16816-fe9d-4046-8c51-84bfa5fe720e
I0524 21:06:32.756592 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:32.756601 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:32.756611 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:32.756620 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:32.756631 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:32 GMT
I0524 21:06:32.756978 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:32.757294 26754 node_ready.go:58] node "multinode-935345-m02" has status "Ready":"False"
I0524 21:06:33.253692 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:33.253718 26754 round_trippers.go:469] Request Headers:
I0524 21:06:33.253731 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:33.253740 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:33.256682 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:33.256704 26754 round_trippers.go:577] Response Headers:
I0524 21:06:33.256715 26754 round_trippers.go:580] Audit-Id: fba4e680-a2ab-459a-a9fa-e3d44ceb16c5
I0524 21:06:33.256724 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:33.256740 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:33.256745 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:33.256754 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:33.256763 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:33 GMT
I0524 21:06:33.257193 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:33.753524 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:33.753551 26754 round_trippers.go:469] Request Headers:
I0524 21:06:33.753563 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:33.753573 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:33.757075 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:33.757097 26754 round_trippers.go:577] Response Headers:
I0524 21:06:33.757106 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:33.757113 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:33.757121 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:33.757129 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:33 GMT
I0524 21:06:33.757139 26754 round_trippers.go:580] Audit-Id: 921b8fae-0615-4aea-a014-a8432347a199
I0524 21:06:33.757152 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:33.757281 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:34.252870 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:34.252894 26754 round_trippers.go:469] Request Headers:
I0524 21:06:34.252903 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:34.252910 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:34.255204 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:34.255230 26754 round_trippers.go:577] Response Headers:
I0524 21:06:34.255240 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:34.255247 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:34.255256 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:34 GMT
I0524 21:06:34.255264 26754 round_trippers.go:580] Audit-Id: da1421ba-7c18-4e89-8048-2cbb40ee7e93
I0524 21:06:34.255273 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:34.255290 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:34.255501 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:34.753200 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:34.753228 26754 round_trippers.go:469] Request Headers:
I0524 21:06:34.753240 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:34.753250 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:34.757577 26754 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0524 21:06:34.757604 26754 round_trippers.go:577] Response Headers:
I0524 21:06:34.757615 26754 round_trippers.go:580] Audit-Id: 3f2616d4-0ff7-48ec-827d-f51217729486
I0524 21:06:34.757623 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:34.757632 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:34.757640 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:34.757653 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:34.757666 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:34 GMT
I0524 21:06:34.758262 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"525","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3874 chars]
I0524 21:06:34.758534 26754 node_ready.go:58] node "multinode-935345-m02" has status "Ready":"False"
I0524 21:06:35.253201 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:35.253224 26754 round_trippers.go:469] Request Headers:
I0524 21:06:35.253232 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:35.253239 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:35.256251 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:35.256279 26754 round_trippers.go:577] Response Headers:
I0524 21:06:35.256289 26754 round_trippers.go:580] Audit-Id: 5963c0b4-d3fe-4be4-9049-3ad0a619b904
I0524 21:06:35.256297 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:35.256304 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:35.256312 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:35.256320 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:35.256328 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:35 GMT
I0524 21:06:35.256811 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"540","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4266 chars]
I0524 21:06:35.753198 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:35.753230 26754 round_trippers.go:469] Request Headers:
I0524 21:06:35.753239 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:35.753245 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:35.756304 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:35.756320 26754 round_trippers.go:577] Response Headers:
I0524 21:06:35.756327 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:35.756332 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:35.756338 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:35.756343 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:35 GMT
I0524 21:06:35.756348 26754 round_trippers.go:580] Audit-Id: 1db6b340-173e-48bf-b400-1b481730d6c0
I0524 21:06:35.756353 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:35.756651 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"540","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4266 chars]
I0524 21:06:36.253310 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:36.253337 26754 round_trippers.go:469] Request Headers:
I0524 21:06:36.253347 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:36.253355 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:36.256104 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:36.256131 26754 round_trippers.go:577] Response Headers:
I0524 21:06:36.256141 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:36.256151 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:36.256162 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:36 GMT
I0524 21:06:36.256170 26754 round_trippers.go:580] Audit-Id: 4cefd9f7-c70f-4b12-b9db-a62ee62e60a4
I0524 21:06:36.256179 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:36.256187 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:36.256268 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"540","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4266 chars]
I0524 21:06:36.752816 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:36.752842 26754 round_trippers.go:469] Request Headers:
I0524 21:06:36.752850 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:36.752861 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:36.755548 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:36.755566 26754 round_trippers.go:577] Response Headers:
I0524 21:06:36.755572 26754 round_trippers.go:580] Audit-Id: 9b55d312-b074-402a-a430-f7802d6f823b
I0524 21:06:36.755578 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:36.755583 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:36.755591 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:36.755604 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:36.755613 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:36 GMT
I0524 21:06:36.755756 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"540","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4266 chars]
I0524 21:06:37.253364 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:37.253391 26754 round_trippers.go:469] Request Headers:
I0524 21:06:37.253401 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:37.253408 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:37.256230 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:37.256252 26754 round_trippers.go:577] Response Headers:
I0524 21:06:37.256265 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:37.256273 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:37 GMT
I0524 21:06:37.256281 26754 round_trippers.go:580] Audit-Id: 1b0f4868-8065-4dc3-8951-69cab4d859bd
I0524 21:06:37.256289 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:37.256297 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:37.256310 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:37.256629 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"540","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4266 chars]
I0524 21:06:37.256894 26754 node_ready.go:58] node "multinode-935345-m02" has status "Ready":"False"
I0524 21:06:37.753770 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:37.753792 26754 round_trippers.go:469] Request Headers:
I0524 21:06:37.753803 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:37.753811 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:37.757163 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:37.757188 26754 round_trippers.go:577] Response Headers:
I0524 21:06:37.757198 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:37.757207 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:37.757216 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:37.757226 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:37.757235 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:37 GMT
I0524 21:06:37.757249 26754 round_trippers.go:580] Audit-Id: 7038cd4a-8cb1-4e1c-b0a9-474298422243
I0524 21:06:37.757427 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"540","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4266 chars]
I0524 21:06:38.252968 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:38.253003 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.253012 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.253018 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.255590 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:38.255606 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.255613 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.255621 26754 round_trippers.go:580] Audit-Id: 47a17887-1611-4ae4-8ec1-08d86f7b5ec2
I0524 21:06:38.255630 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.255642 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.255650 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.255659 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.255731 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"546","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4132 chars]
I0524 21:06:38.255990 26754 node_ready.go:49] node "multinode-935345-m02" has status "Ready":"True"
I0524 21:06:38.256003 26754 node_ready.go:38] duration metric: took 12.006835053s waiting for node "multinode-935345-m02" to be "Ready" ...
I0524 21:06:38.256010 26754 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0524 21:06:38.256056 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods
I0524 21:06:38.256063 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.256070 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.256076 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.259691 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:38.259710 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.259718 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.259724 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.259733 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.259739 26754 round_trippers.go:580] Audit-Id: 3992a9f2-1252-457d-ba4b-0ea940b68501
I0524 21:06:38.259746 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.259751 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.260759 26754 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"546"},"items":[{"metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"451","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67520 chars]
I0524 21:06:38.262779 26754 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-b58rt" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.262851 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-b58rt
I0524 21:06:38.262861 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.262872 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.262883 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.265318 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:38.265331 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.265337 26754 round_trippers.go:580] Audit-Id: 81f97726-61f3-43ac-ba71-6d0618a3b8b2
I0524 21:06:38.265343 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.265348 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.265354 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.265360 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.265365 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.265616 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-b58rt","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"96aeb17f-d77a-4748-a3fb-a5f21e810413","resourceVersion":"451","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e051df0b-db78-4325-85c4-3f40ff451836","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e051df0b-db78-4325-85c4-3f40ff451836\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
I0524 21:06:38.265978 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:38.265990 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.266000 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.266009 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.267965 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:06:38.267979 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.267984 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.267990 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.267995 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.268000 26754 round_trippers.go:580] Audit-Id: 62503b7c-e803-47d5-a062-3e0d119b6237
I0524 21:06:38.268005 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.268037 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.268160 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"462","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0524 21:06:38.268388 26754 pod_ready.go:92] pod "coredns-5d78c9869d-b58rt" in "kube-system" namespace has status "Ready":"True"
I0524 21:06:38.268398 26754 pod_ready.go:81] duration metric: took 5.600492ms waiting for pod "coredns-5d78c9869d-b58rt" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.268404 26754 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.268437 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-935345
I0524 21:06:38.268445 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.268452 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.268458 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.270738 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:38.270750 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.270756 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.270761 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.270766 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.270771 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.270777 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.270782 26754 round_trippers.go:580] Audit-Id: f7d7d55b-2eec-48e2-8c54-7dd3842c14b0
I0524 21:06:38.271279 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-935345","namespace":"kube-system","uid":"e8724e83-c511-481d-a9ca-8c0943c03817","resourceVersion":"426","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.141:2379","kubernetes.io/config.hash":"5dc37d6c21cc4c3942f55949a2300f81","kubernetes.io/config.mirror":"5dc37d6c21cc4c3942f55949a2300f81","kubernetes.io/config.seen":"2023-05-24T21:05:22.144619745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
I0524 21:06:38.271582 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:38.271593 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.271600 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.271606 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.273403 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:06:38.273415 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.273421 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.273427 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.273432 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.273437 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.273442 26754 round_trippers.go:580] Audit-Id: 1e704524-31b7-4eac-89f1-dd56c3b3436f
I0524 21:06:38.273447 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.273764 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"462","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0524 21:06:38.274067 26754 pod_ready.go:92] pod "etcd-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:06:38.274082 26754 pod_ready.go:81] duration metric: took 5.67194ms waiting for pod "etcd-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.274098 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.274143 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-935345
I0524 21:06:38.274153 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.274164 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.274177 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.276211 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:38.276227 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.276235 26754 round_trippers.go:580] Audit-Id: 057f38a1-0e3a-4c89-b45d-c6fefd9331a7
I0524 21:06:38.276244 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.276255 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.276264 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.276272 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.276283 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.276524 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-935345","namespace":"kube-system","uid":"4e1d7b9b-2385-4595-80da-1cbc3e9804e6","resourceVersion":"427","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.141:8443","kubernetes.io/config.hash":"cbb29c660ce956b2d6c62dd44f97e9c5","kubernetes.io/config.mirror":"cbb29c660ce956b2d6c62dd44f97e9c5","kubernetes.io/config.seen":"2023-05-24T21:05:22.144620796Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
I0524 21:06:38.276927 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:38.276940 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.276950 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.276960 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.278736 26754 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0524 21:06:38.278748 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.278757 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.278766 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.278774 26754 round_trippers.go:580] Audit-Id: 5855be06-17c0-4f7f-9be2-2c5e187b3f7c
I0524 21:06:38.278782 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.278793 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.278806 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.279030 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"462","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0524 21:06:38.279244 26754 pod_ready.go:92] pod "kube-apiserver-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:06:38.279254 26754 pod_ready.go:81] duration metric: took 5.150178ms waiting for pod "kube-apiserver-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.279261 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.279294 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-935345
I0524 21:06:38.279302 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.279308 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.279314 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.282957 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:38.282969 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.282975 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.282982 26754 round_trippers.go:580] Audit-Id: 4ae5017a-d808-4eae-aaf9-510a485a677d
I0524 21:06:38.283024 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.283041 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.283049 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.283057 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.283179 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-935345","namespace":"kube-system","uid":"58257693-9611-4d6d-a90d-c54c61f9bdb6","resourceVersion":"428","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4ee83c04bd6438a0f47b0e07ad320ac0","kubernetes.io/config.mirror":"4ee83c04bd6438a0f47b0e07ad320ac0","kubernetes.io/config.seen":"2023-05-24T21:05:22.144621666Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
I0524 21:06:38.283570 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:38.283585 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.283595 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.283605 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.286711 26754 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0524 21:06:38.286727 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.286737 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.286746 26754 round_trippers.go:580] Audit-Id: f99eb075-2319-4bfb-8e34-aa513551e922
I0524 21:06:38.286755 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.286765 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.286778 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.286787 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.286881 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"462","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0524 21:06:38.287189 26754 pod_ready.go:92] pod "kube-controller-manager-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:06:38.287203 26754 pod_ready.go:81] duration metric: took 7.936259ms waiting for pod "kube-controller-manager-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.287213 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gmjsk" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.453755 26754 request.go:628] Waited for 166.485473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmjsk
I0524 21:06:38.454983 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gmjsk
I0524 21:06:38.454999 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.455010 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.455020 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.457745 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:38.457764 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.457773 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.457781 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.457789 26754 round_trippers.go:580] Audit-Id: 0250e1a6-fed4-4043-9eee-2c0460019712
I0524 21:06:38.457799 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.457811 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.457821 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.458057 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-gmjsk","generateName":"kube-proxy-","namespace":"kube-system","uid":"b372194d-5433-4031-a6e2-eed172ffe8e5","resourceVersion":"527","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"892e89de-a123-493e-b092-5b426c6044d4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"892e89de-a123-493e-b092-5b426c6044d4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5543 chars]
I0524 21:06:38.653772 26754 request.go:628] Waited for 195.269906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:38.653819 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345-m02
I0524 21:06:38.653824 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.653832 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.653838 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.656180 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:38.656201 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.656211 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.656219 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.656228 26754 round_trippers.go:580] Audit-Id: cc76f3af-00f1-4c07-a8f8-8e735d4bb193
I0524 21:06:38.656237 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.656247 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.656257 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.656375 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345-m02","uid":"bd0ff0b7-8db6-4437-b99c-a7e583c9c4c8","resourceVersion":"546","creationTimestamp":"2023-05-24T21:06:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4132 chars]
I0524 21:06:38.656666 26754 pod_ready.go:92] pod "kube-proxy-gmjsk" in "kube-system" namespace has status "Ready":"True"
I0524 21:06:38.656682 26754 pod_ready.go:81] duration metric: took 369.461946ms waiting for pod "kube-proxy-gmjsk" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.656694 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5gdf" in "kube-system" namespace to be "Ready" ...
I0524 21:06:38.853022 26754 request.go:628] Waited for 196.268394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5gdf
I0524 21:06:38.853090 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5gdf
I0524 21:06:38.853096 26754 round_trippers.go:469] Request Headers:
I0524 21:06:38.853103 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:38.853110 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:38.855736 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:38.855762 26754 round_trippers.go:577] Response Headers:
I0524 21:06:38.855771 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:38.855779 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:38.855787 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:38 GMT
I0524 21:06:38.855794 26754 round_trippers.go:580] Audit-Id: fecaad40-2362-4ab5-97f9-383142a59923
I0524 21:06:38.855802 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:38.855810 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:38.855897 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5gdf","generateName":"kube-proxy-","namespace":"kube-system","uid":"5f24e81d-a75c-49a0-919b-2266a2a0fd94","resourceVersion":"420","creationTimestamp":"2023-05-24T21:05:34Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"892e89de-a123-493e-b092-5b426c6044d4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"892e89de-a123-493e-b092-5b426c6044d4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5535 chars]
I0524 21:06:39.053752 26754 request.go:628] Waited for 197.391386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:39.053811 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:39.053815 26754 round_trippers.go:469] Request Headers:
I0524 21:06:39.053823 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:39.053830 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:39.056275 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:39.056291 26754 round_trippers.go:577] Response Headers:
I0524 21:06:39.056300 26754 round_trippers.go:580] Audit-Id: 5b558f8a-2351-4b18-aaa0-2f49b4990996
I0524 21:06:39.056309 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:39.056319 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:39.056328 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:39.056340 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:39.056350 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:39 GMT
I0524 21:06:39.056470 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"462","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0524 21:06:39.056868 26754 pod_ready.go:92] pod "kube-proxy-j5gdf" in "kube-system" namespace has status "Ready":"True"
I0524 21:06:39.056891 26754 pod_ready.go:81] duration metric: took 400.188583ms waiting for pod "kube-proxy-j5gdf" in "kube-system" namespace to be "Ready" ...
I0524 21:06:39.056904 26754 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:39.253354 26754 request.go:628] Waited for 196.385972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-935345
I0524 21:06:39.253421 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-935345
I0524 21:06:39.253426 26754 round_trippers.go:469] Request Headers:
I0524 21:06:39.253434 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:39.253441 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:39.256248 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:39.256271 26754 round_trippers.go:577] Response Headers:
I0524 21:06:39.256280 26754 round_trippers.go:580] Audit-Id: f90f78cb-f141-405d-9746-52ac2fd6c0fc
I0524 21:06:39.256288 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:39.256303 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:39.256311 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:39.256322 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:39.256331 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:39 GMT
I0524 21:06:39.256507 26754 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-935345","namespace":"kube-system","uid":"389454ac-5dbb-4456-a505-cf6a21fb81d4","resourceVersion":"429","creationTimestamp":"2023-05-24T21:05:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"541a493605483a25e1c768fdd305f2b9","kubernetes.io/config.mirror":"541a493605483a25e1c768fdd305f2b9","kubernetes.io/config.seen":"2023-05-24T21:05:22.144616107Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T21:05:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
I0524 21:06:39.453322 26754 request.go:628] Waited for 196.372594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:39.453401 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes/multinode-935345
I0524 21:06:39.453410 26754 round_trippers.go:469] Request Headers:
I0524 21:06:39.453420 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:39.453431 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:39.456367 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:39.456387 26754 round_trippers.go:577] Response Headers:
I0524 21:06:39.456396 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:39 GMT
I0524 21:06:39.456404 26754 round_trippers.go:580] Audit-Id: ae3906f4-ef61-4a7c-a166-3c8b5f86d691
I0524 21:06:39.456412 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:39.456421 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:39.456430 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:39.456445 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:39.456790 26754 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"462","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T21:05:18Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
I0524 21:06:39.457081 26754 pod_ready.go:92] pod "kube-scheduler-multinode-935345" in "kube-system" namespace has status "Ready":"True"
I0524 21:06:39.457093 26754 pod_ready.go:81] duration metric: took 400.181549ms waiting for pod "kube-scheduler-multinode-935345" in "kube-system" namespace to be "Ready" ...
I0524 21:06:39.457104 26754 pod_ready.go:38] duration metric: took 1.20108675s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0524 21:06:39.457162 26754 system_svc.go:44] waiting for kubelet service to be running ....
I0524 21:06:39.457205 26754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0524 21:06:39.471365 26754 system_svc.go:56] duration metric: took 14.195679ms WaitForService to wait for kubelet.
I0524 21:06:39.471389 26754 kubeadm.go:581] duration metric: took 13.246875431s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0524 21:06:39.471413 26754 node_conditions.go:102] verifying NodePressure condition ...
I0524 21:06:39.653852 26754 request.go:628] Waited for 182.35764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.141:8443/api/v1/nodes
I0524 21:06:39.653898 26754 round_trippers.go:463] GET https://192.168.39.141:8443/api/v1/nodes
I0524 21:06:39.653903 26754 round_trippers.go:469] Request Headers:
I0524 21:06:39.653910 26754 round_trippers.go:473] Accept: application/json, */*
I0524 21:06:39.653917 26754 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0524 21:06:39.656501 26754 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0524 21:06:39.656517 26754 round_trippers.go:577] Response Headers:
I0524 21:06:39.656524 26754 round_trippers.go:580] Audit-Id: 283a6fbf-898c-4fcf-a809-b4ddf9c1c27f
I0524 21:06:39.656530 26754 round_trippers.go:580] Cache-Control: no-cache, private
I0524 21:06:39.656535 26754 round_trippers.go:580] Content-Type: application/json
I0524 21:06:39.656541 26754 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 00564cb3-b342-40b4-969a-6b25b535f5ea
I0524 21:06:39.656546 26754 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 073d57f7-87dc-4beb-8b30-299b05440170
I0524 21:06:39.656553 26754 round_trippers.go:580] Date: Wed, 24 May 2023 21:06:39 GMT
I0524 21:06:39.656718 26754 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"547"},"items":[{"metadata":{"name":"multinode-935345","uid":"3d89fa4c-80e4-48ca-88b5-8007b1fd043d","resourceVersion":"462","creationTimestamp":"2023-05-24T21:05:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-935345","kubernetes.io/os":"linux","minikube.k8s.io/commit":"8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb","minikube.k8s.io/name":"multinode-935345","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T21_05_23_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10016 chars]
I0524 21:06:39.657288 26754 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0524 21:06:39.657308 26754 node_conditions.go:123] node cpu capacity is 2
I0524 21:06:39.657317 26754 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0524 21:06:39.657321 26754 node_conditions.go:123] node cpu capacity is 2
I0524 21:06:39.657325 26754 node_conditions.go:105] duration metric: took 185.908275ms to run NodePressure ...
I0524 21:06:39.657334 26754 start.go:228] waiting for startup goroutines ...
I0524 21:06:39.657356 26754 start.go:242] writing updated cluster config ...
I0524 21:06:39.657621 26754 ssh_runner.go:195] Run: rm -f paused
I0524 21:06:39.702323 26754 start.go:568] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
I0524 21:06:39.704580 26754 out.go:177] * Done! kubectl is now configured to use "multinode-935345" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Wed 2023-05-24 21:04:45 UTC, ends at Wed 2023-05-24 21:08:12 UTC. --
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.299574249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.314385669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.314611136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.314626211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.314634410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:05:44 multinode-935345 cri-dockerd[1183]: time="2023-05-24T21:05:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f4cb5c83eefb0a29ce417a61a960257d5fe9204bb4b7049f1025307c5080bd4/resolv.conf as [nameserver 192.168.122.1]"
May 24 21:05:44 multinode-935345 cri-dockerd[1183]: time="2023-05-24T21:05:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8b9f330af55490ac0ebc4e21c58461ba111a1a21e931ec9f3de2ba5c5a31f913/resolv.conf as [nameserver 192.168.122.1]"
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.974268208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.974657886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.974884426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.975066461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.996227022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.998917668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:05:44 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.999218670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 21:05:45 multinode-935345 dockerd[985]: time="2023-05-24T21:05:44.999686755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:06:40 multinode-935345 dockerd[985]: time="2023-05-24T21:06:40.872232739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 21:06:40 multinode-935345 dockerd[985]: time="2023-05-24T21:06:40.872461128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:06:40 multinode-935345 dockerd[985]: time="2023-05-24T21:06:40.872492211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 21:06:40 multinode-935345 dockerd[985]: time="2023-05-24T21:06:40.872512394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:06:41 multinode-935345 cri-dockerd[1183]: time="2023-05-24T21:06:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/713913137d7739419c44b5d7319d90b4e61d9ef210592ecfeb8c93f62789f683/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
May 24 21:06:42 multinode-935345 cri-dockerd[1183]: time="2023-05-24T21:06:42Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
May 24 21:06:42 multinode-935345 dockerd[985]: time="2023-05-24T21:06:42.474039759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 21:06:42 multinode-935345 dockerd[985]: time="2023-05-24T21:06:42.474584327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 21:06:42 multinode-935345 dockerd[985]: time="2023-05-24T21:06:42.474618616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 21:06:42 multinode-935345 dockerd[985]: time="2023-05-24T21:06:42.474645816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
ec6e302bd604e gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 About a minute ago Running busybox 0 713913137d773
8381242ccf076 6e38f40d628db 2 minutes ago Running storage-provisioner 0 8b9f330af5549
f3bab5d56b8bf ead0a4a53df89 2 minutes ago Running coredns 0 6f4cb5c83eefb
aa4f844d3d504 kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974 2 minutes ago Running kindnet-cni 0 c7fa230e07028
769ded3a44de5 b8aa50768fd67 2 minutes ago Running kube-proxy 0 fa58c98c9c368
2164ab09872bb 89e70da428d29 2 minutes ago Running kube-scheduler 0 8f6502a32f939
24b46fe69de0c 86b6af7dd652c 2 minutes ago Running etcd 0 1b0784185ef02
a72f4c51f2761 ac2b7465ebba9 2 minutes ago Running kube-controller-manager 0 64fcaa45dd5ee
9d0416b86190e c5b13e4f7806d 2 minutes ago Running kube-apiserver 0 3e1e2aefd4c1b
*
* ==> coredns [f3bab5d56b8b] <==
* [INFO] 10.244.1.2:43315 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000234524s
[INFO] 10.244.0.3:40508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080635s
[INFO] 10.244.0.3:51509 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002265135s
[INFO] 10.244.0.3:33824 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000804s
[INFO] 10.244.0.3:42789 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069816s
[INFO] 10.244.0.3:59380 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001549937s
[INFO] 10.244.0.3:54411 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076355s
[INFO] 10.244.0.3:38143 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065654s
[INFO] 10.244.0.3:42993 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000156759s
[INFO] 10.244.1.2:36857 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123768s
[INFO] 10.244.1.2:60845 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083886s
[INFO] 10.244.1.2:45782 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092439s
[INFO] 10.244.1.2:45825 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000145172s
[INFO] 10.244.0.3:45035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104709s
[INFO] 10.244.0.3:43801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084024s
[INFO] 10.244.0.3:39788 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083414s
[INFO] 10.244.0.3:46717 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138496s
[INFO] 10.244.1.2:34759 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143895s
[INFO] 10.244.1.2:51107 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000279665s
[INFO] 10.244.1.2:35318 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017134s
[INFO] 10.244.1.2:47747 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000279832s
[INFO] 10.244.0.3:45010 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108011s
[INFO] 10.244.0.3:47741 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011079s
[INFO] 10.244.0.3:44871 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090275s
[INFO] 10.244.0.3:33679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000217653s
*
* ==> describe nodes <==
* Name: multinode-935345
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-935345
kubernetes.io/os=linux
minikube.k8s.io/commit=8b8a62a7d458701638c67feeb1a6ff50fc0c5fbb
minikube.k8s.io/name=multinode-935345
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_05_24T21_05_23_0700
minikube.k8s.io/version=v1.30.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 24 May 2023 21:05:18 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-935345
AcquireTime: <unset>
RenewTime: Wed, 24 May 2023 21:08:05 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 24 May 2023 21:06:54 +0000 Wed, 24 May 2023 21:05:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 24 May 2023 21:06:54 +0000 Wed, 24 May 2023 21:05:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 24 May 2023 21:06:54 +0000 Wed, 24 May 2023 21:05:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 24 May 2023 21:06:54 +0000 Wed, 24 May 2023 21:05:43 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.141
Hostname: multinode-935345
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: b74c12062fb641dc9abf9ca154d43768
System UUID: b74c1206-2fb6-41dc-9abf-9ca154d43768
Boot ID: 21baabb0-e87a-42fe-a45c-90d448e3098d
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.1
Kubelet Version: v1.27.2
Kube-Proxy Version: v1.27.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-67b7f59bb-w5dwz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 92s
kube-system coredns-5d78c9869d-b58rt 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 2m38s
kube-system etcd-multinode-935345 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 2m50s
kube-system kindnet-lkcmf 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 2m38s
kube-system kube-apiserver-multinode-935345 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m50s
kube-system kube-controller-manager-multinode-935345 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m50s
kube-system kube-proxy-j5gdf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m38s
kube-system kube-scheduler-multinode-935345 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m50s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m36s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m36s kube-proxy
Normal Starting 2m59s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m59s (x8 over 2m59s) kubelet Node multinode-935345 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m59s (x8 over 2m59s) kubelet Node multinode-935345 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m59s (x7 over 2m59s) kubelet Node multinode-935345 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m59s kubelet Updated Node Allocatable limit across pods
Normal Starting 2m50s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m50s kubelet Node multinode-935345 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m50s kubelet Node multinode-935345 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m50s kubelet Node multinode-935345 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m50s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m39s node-controller Node multinode-935345 event: Registered Node multinode-935345 in Controller
Normal NodeReady 2m29s kubelet Node multinode-935345 status is now: NodeReady
Name: multinode-935345-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-935345-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 24 May 2023 21:06:24 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-935345-m02
AcquireTime: <unset>
RenewTime: Wed, 24 May 2023 21:08:06 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 24 May 2023 21:06:55 +0000 Wed, 24 May 2023 21:06:24 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 24 May 2023 21:06:55 +0000 Wed, 24 May 2023 21:06:24 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 24 May 2023 21:06:55 +0000 Wed, 24 May 2023 21:06:24 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 24 May 2023 21:06:55 +0000 Wed, 24 May 2023 21:06:37 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.200
Hostname: multinode-935345-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 25c6f79a96e14bed942ad097b74a8702
System UUID: 25c6f79a-96e1-4bed-942a-d097b74a8702
Boot ID: e4aea58e-a099-4d92-94e0-c97af112d1e6
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.1
Kubelet Version: v1.27.2
Kube-Proxy Version: v1.27.2
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-67b7f59bb-m29kw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 92s
kube-system kindnet-5xnjb 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 108s
kube-system kube-proxy-gmjsk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 108s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 105s kube-proxy
Normal Starting 108s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 108s (x2 over 108s) kubelet Node multinode-935345-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 108s (x2 over 108s) kubelet Node multinode-935345-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 108s (x2 over 108s) kubelet Node multinode-935345-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 108s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 104s node-controller Node multinode-935345-m02 event: Registered Node multinode-935345-m02 in Controller
Normal NodeReady 95s kubelet Node multinode-935345-m02 status is now: NodeReady
Name: multinode-935345-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-935345-m03
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 24 May 2023 21:07:25 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-935345-m03
AcquireTime: <unset>
RenewTime: Wed, 24 May 2023 21:07:45 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 24 May 2023 21:07:38 +0000 Wed, 24 May 2023 21:07:25 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 24 May 2023 21:07:38 +0000 Wed, 24 May 2023 21:07:25 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 24 May 2023 21:07:38 +0000 Wed, 24 May 2023 21:07:25 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 24 May 2023 21:07:38 +0000 Wed, 24 May 2023 21:07:38 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.9
Hostname: multinode-935345-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 62cebb9458ec40d0bc816efd39933b2a
System UUID: 62cebb94-58ec-40d0-bc81-6efd39933b2a
Boot ID: 94ef35bb-dbcb-4e07-952f-ae23d9a14153
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.1
Kubelet Version: v1.27.2
Kube-Proxy Version: v1.27.2
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-vptt5 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 47s
kube-system kube-proxy-hwllb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 45s kube-proxy
Normal Starting 47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 47s (x2 over 47s) kubelet Node multinode-935345-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 47s (x2 over 47s) kubelet Node multinode-935345-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 47s (x2 over 47s) kubelet Node multinode-935345-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 47s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 44s node-controller Node multinode-935345-m03 event: Registered Node multinode-935345-m03 in Controller
Normal NodeReady 34s kubelet Node multinode-935345-m03 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.070180] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.970613] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.312384] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.135804] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +4.997246] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +7.298103] systemd-fstab-generator[543]: Ignoring "noauto" for root device
[ +0.112082] systemd-fstab-generator[554]: Ignoring "noauto" for root device
[May24 21:05] systemd-fstab-generator[734]: Ignoring "noauto" for root device
[ +3.328101] kauditd_printk_skb: 14 callbacks suppressed
[ +0.350996] systemd-fstab-generator[907]: Ignoring "noauto" for root device
[ +0.272465] systemd-fstab-generator[946]: Ignoring "noauto" for root device
[ +0.107968] systemd-fstab-generator[957]: Ignoring "noauto" for root device
[ +0.118788] systemd-fstab-generator[970]: Ignoring "noauto" for root device
[ +1.502070] systemd-fstab-generator[1128]: Ignoring "noauto" for root device
[ +0.102669] systemd-fstab-generator[1139]: Ignoring "noauto" for root device
[ +0.113788] systemd-fstab-generator[1150]: Ignoring "noauto" for root device
[ +0.098421] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
[ +0.203602] systemd-fstab-generator[1175]: Ignoring "noauto" for root device
[ +4.405394] systemd-fstab-generator[1438]: Ignoring "noauto" for root device
[ +0.528345] kauditd_printk_skb: 68 callbacks suppressed
[ +8.246765] systemd-fstab-generator[2340]: Ignoring "noauto" for root device
[ +20.421380] kauditd_printk_skb: 14 callbacks suppressed
*
* ==> etcd [24b46fe69de0] <==
* {"level":"info","ts":"2023-05-24T21:05:16.421Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.39.141:2380"}
{"level":"info","ts":"2023-05-24T21:05:16.421Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"2398e045949c73cb","initial-advertise-peer-urls":["https://192.168.39.141:2380"],"listen-peer-urls":["https://192.168.39.141:2380"],"advertise-client-urls":["https://192.168.39.141:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.141:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-05-24T21:05:16.421Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-05-24T21:05:17.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb is starting a new election at term 1"}
{"level":"info","ts":"2023-05-24T21:05:17.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became pre-candidate at term 1"}
{"level":"info","ts":"2023-05-24T21:05:17.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgPreVoteResp from 2398e045949c73cb at term 1"}
{"level":"info","ts":"2023-05-24T21:05:17.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became candidate at term 2"}
{"level":"info","ts":"2023-05-24T21:05:17.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb received MsgVoteResp from 2398e045949c73cb at term 2"}
{"level":"info","ts":"2023-05-24T21:05:17.174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2398e045949c73cb became leader at term 2"}
{"level":"info","ts":"2023-05-24T21:05:17.175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2398e045949c73cb elected leader 2398e045949c73cb at term 2"}
{"level":"info","ts":"2023-05-24T21:05:17.176Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-24T21:05:17.177Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2398e045949c73cb","local-member-attributes":"{Name:multinode-935345 ClientURLs:[https://192.168.39.141:2379]}","request-path":"/0/members/2398e045949c73cb/attributes","cluster-id":"bf8381628c3e4cea","publish-timeout":"7s"}
{"level":"info","ts":"2023-05-24T21:05:17.177Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-05-24T21:05:17.178Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf8381628c3e4cea","local-member-id":"2398e045949c73cb","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-24T21:05:17.178Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-24T21:05:17.178Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-05-24T21:05:17.178Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-05-24T21:05:17.178Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-05-24T21:05:17.179Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-05-24T21:05:17.180Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.39.141:2379"}
{"level":"info","ts":"2023-05-24T21:05:17.185Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-05-24T21:06:17.664Z","caller":"traceutil/trace.go:171","msg":"trace[214395885] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"174.323897ms","start":"2023-05-24T21:06:17.490Z","end":"2023-05-24T21:06:17.664Z","steps":["trace[214395885] 'process raft request' (duration: 173.720963ms)"],"step_count":1}
{"level":"warn","ts":"2023-05-24T21:06:18.350Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.065587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2023-05-24T21:06:18.351Z","caller":"traceutil/trace.go:171","msg":"trace[1296186894] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:481; }","duration":"172.266724ms","start":"2023-05-24T21:06:18.178Z","end":"2023-05-24T21:06:18.351Z","steps":["trace[1296186894] 'count revisions from in-memory index tree' (duration: 172.001313ms)"],"step_count":1}
{"level":"info","ts":"2023-05-24T21:07:18.201Z","caller":"traceutil/trace.go:171","msg":"trace[1696431093] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"192.519309ms","start":"2023-05-24T21:07:18.008Z","end":"2023-05-24T21:07:18.201Z","steps":["trace[1696431093] 'process raft request' (duration: 192.385133ms)"],"step_count":1}
*
* ==> kernel <==
* 21:08:12 up 3 min, 0 users, load average: 0.22, 0.23, 0.10
Linux multinode-935345 5.10.57 #1 SMP Wed May 24 02:58:02 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [aa4f844d3d50] <==
* I0524 21:07:32.304391 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.9 Flags: [] Table: 0}
I0524 21:07:42.319138 1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
I0524 21:07:42.319190 1 main.go:227] handling current node
I0524 21:07:42.319202 1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
I0524 21:07:42.319207 1 main.go:250] Node multinode-935345-m02 has CIDR [10.244.1.0/24]
I0524 21:07:42.319790 1 main.go:223] Handling node with IPs: map[192.168.39.9:{}]
I0524 21:07:42.319828 1 main.go:250] Node multinode-935345-m03 has CIDR [10.244.2.0/24]
I0524 21:07:52.330911 1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
I0524 21:07:52.331087 1 main.go:227] handling current node
I0524 21:07:52.331187 1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
I0524 21:07:52.331214 1 main.go:250] Node multinode-935345-m02 has CIDR [10.244.1.0/24]
I0524 21:07:52.331463 1 main.go:223] Handling node with IPs: map[192.168.39.9:{}]
I0524 21:07:52.331530 1 main.go:250] Node multinode-935345-m03 has CIDR [10.244.2.0/24]
I0524 21:08:02.340039 1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
I0524 21:08:02.340090 1 main.go:227] handling current node
I0524 21:08:02.340119 1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
I0524 21:08:02.340127 1 main.go:250] Node multinode-935345-m02 has CIDR [10.244.1.0/24]
I0524 21:08:02.340532 1 main.go:223] Handling node with IPs: map[192.168.39.9:{}]
I0524 21:08:02.340549 1 main.go:250] Node multinode-935345-m03 has CIDR [10.244.2.0/24]
I0524 21:08:12.354936 1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
I0524 21:08:12.354960 1 main.go:227] handling current node
I0524 21:08:12.354980 1 main.go:223] Handling node with IPs: map[192.168.39.200:{}]
I0524 21:08:12.354986 1 main.go:250] Node multinode-935345-m02 has CIDR [10.244.1.0/24]
I0524 21:08:12.355094 1 main.go:223] Handling node with IPs: map[192.168.39.9:{}]
I0524 21:08:12.355099 1 main.go:250] Node multinode-935345-m03 has CIDR [10.244.2.0/24]
*
* ==> kube-apiserver [9d0416b86190] <==
* I0524 21:05:18.916146 1 shared_informer.go:318] Caches are synced for configmaps
I0524 21:05:18.916387 1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0524 21:05:18.916651 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
E0524 21:05:18.919470 1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0524 21:05:18.920478 1 controller.go:624] quota admission added evaluator for: namespaces
I0524 21:05:18.924895 1 cache.go:39] Caches are synced for autoregister controller
I0524 21:05:18.925261 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0524 21:05:18.947390 1 shared_informer.go:318] Caches are synced for node_authorizer
I0524 21:05:19.124711 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0524 21:05:19.379353 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0524 21:05:19.734783 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0524 21:05:19.739165 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0524 21:05:19.739209 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0524 21:05:20.510726 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0524 21:05:20.551123 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0524 21:05:20.679462 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
W0524 21:05:20.695961 1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.39.141]
I0524 21:05:20.696897 1 controller.go:624] quota admission added evaluator for: endpoints
I0524 21:05:20.705109 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0524 21:05:20.819477 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0524 21:05:21.986114 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0524 21:05:22.007802 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
I0524 21:05:22.021187 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0524 21:05:34.372865 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
I0524 21:05:34.532236 1 controller.go:624] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [a72f4c51f276] <==
* I0524 21:05:34.409944 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lkcmf"
I0524 21:05:34.541556 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
I0524 21:05:34.681403 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-7gq4r"
I0524 21:05:34.692045 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-b58rt"
I0524 21:05:34.907636 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
I0524 21:05:34.941796 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-7gq4r"
I0524 21:05:48.720613 1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
I0524 21:06:24.761218 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-935345-m02\" does not exist"
I0524 21:06:24.793213 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5xnjb"
I0524 21:06:24.793531 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gmjsk"
I0524 21:06:24.802917 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-935345-m02" podCIDRs=[10.244.1.0/24]
I0524 21:06:28.726239 1 event.go:307] "Event occurred" object="multinode-935345-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-935345-m02 event: Registered Node multinode-935345-m02 in Controller"
I0524 21:06:28.726418 1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-935345-m02"
W0524 21:06:37.818660 1 topologycache.go:232] Can't get CPU or zone information for multinode-935345-m02 node
I0524 21:06:40.367712 1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
I0524 21:06:40.389769 1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-m29kw"
I0524 21:06:40.407788 1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-w5dwz"
I0524 21:07:25.464829 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-935345-m03\" does not exist"
W0524 21:07:25.467754 1 topologycache.go:232] Can't get CPU or zone information for multinode-935345-m02 node
I0524 21:07:25.496240 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hwllb"
I0524 21:07:25.497481 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vptt5"
I0524 21:07:25.503521 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-935345-m03" podCIDRs=[10.244.2.0/24]
I0524 21:07:28.749682 1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-935345-m03"
I0524 21:07:28.750119 1 event.go:307] "Event occurred" object="multinode-935345-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-935345-m03 event: Registered Node multinode-935345-m03 in Controller"
W0524 21:07:38.838196 1 topologycache.go:232] Can't get CPU or zone information for multinode-935345-m02 node
*
* ==> kube-proxy [769ded3a44de] <==
* I0524 21:05:35.645010 1 node.go:141] Successfully retrieved node IP: 192.168.39.141
I0524 21:05:35.645102 1 server_others.go:110] "Detected node IP" address="192.168.39.141"
I0524 21:05:35.645174 1 server_others.go:551] "Using iptables proxy"
I0524 21:05:35.737613 1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
I0524 21:05:35.737630 1 server_others.go:190] "Using iptables Proxier"
I0524 21:05:35.737656 1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0524 21:05:35.738026 1 server.go:657] "Version info" version="v1.27.2"
I0524 21:05:35.738037 1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0524 21:05:35.738776 1 config.go:188] "Starting service config controller"
I0524 21:05:35.738793 1 shared_informer.go:311] Waiting for caches to sync for service config
I0524 21:05:35.738811 1 config.go:97] "Starting endpoint slice config controller"
I0524 21:05:35.738814 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0524 21:05:35.739723 1 config.go:315] "Starting node config controller"
I0524 21:05:35.739731 1 shared_informer.go:311] Waiting for caches to sync for node config
I0524 21:05:35.839749 1 shared_informer.go:318] Caches are synced for endpoint slice config
I0524 21:05:35.839823 1 shared_informer.go:318] Caches are synced for service config
I0524 21:05:35.840082 1 shared_informer.go:318] Caches are synced for node config
*
* ==> kube-scheduler [2164ab09872b] <==
* W0524 21:05:19.733594 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0524 21:05:19.733649 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0524 21:05:19.758005 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0524 21:05:19.758059 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0524 21:05:19.777124 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0524 21:05:19.777244 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0524 21:05:19.867757 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0524 21:05:19.868038 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0524 21:05:19.931920 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0524 21:05:19.932217 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0524 21:05:19.958916 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0524 21:05:19.959873 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0524 21:05:19.996494 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0524 21:05:19.996925 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0524 21:05:20.014053 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0524 21:05:20.014180 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0524 21:05:20.039622 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0524 21:05:20.039646 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0524 21:05:20.121562 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0524 21:05:20.121616 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0524 21:05:20.188121 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0524 21:05:20.188175 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0524 21:05:20.287601 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0524 21:05:20.287731 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0524 21:05:23.377756 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Wed 2023-05-24 21:04:45 UTC, ends at Wed 2023-05-24 21:08:12 UTC. --
May 24 21:05:38 multinode-935345 kubelet[2361]: I0524 21:05:38.672731 2361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7fa230e07028274a45372cbe43fdbf0029a99532f0c72121b3962a91a63aaa3"
May 24 21:05:39 multinode-935345 kubelet[2361]: I0524 21:05:39.701369 2361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j5gdf" podStartSLOduration=5.70125225 podCreationTimestamp="2023-05-24 21:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 21:05:39.700675876 +0000 UTC m=+17.755758060" watchObservedRunningTime="2023-05-24 21:05:39.70125225 +0000 UTC m=+17.756334432"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.810686 2361 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.843723 2361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lkcmf" podStartSLOduration=7.220174945 podCreationTimestamp="2023-05-24 21:05:34 +0000 UTC" firstStartedPulling="2023-05-24 21:05:38.674946183 +0000 UTC m=+16.730028355" lastFinishedPulling="2023-05-24 21:05:41.298458004 +0000 UTC m=+19.353540174" observedRunningTime="2023-05-24 21:05:42.77519499 +0000 UTC m=+20.830277180" watchObservedRunningTime="2023-05-24 21:05:43.843686764 +0000 UTC m=+21.898768955"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.844022 2361 topology_manager.go:212] "Topology Admit Handler"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.848805 2361 topology_manager.go:212] "Topology Admit Handler"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.872448 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96aeb17f-d77a-4748-a3fb-a5f21e810413-config-volume\") pod \"coredns-5d78c9869d-b58rt\" (UID: \"96aeb17f-d77a-4748-a3fb-a5f21e810413\") " pod="kube-system/coredns-5d78c9869d-b58rt"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.872524 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0cf76689-90ee-4e10-80d0-67519768f5a1-tmp\") pod \"storage-provisioner\" (UID: \"0cf76689-90ee-4e10-80d0-67519768f5a1\") " pod="kube-system/storage-provisioner"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.872565 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdv49\" (UniqueName: \"kubernetes.io/projected/96aeb17f-d77a-4748-a3fb-a5f21e810413-kube-api-access-bdv49\") pod \"coredns-5d78c9869d-b58rt\" (UID: \"96aeb17f-d77a-4748-a3fb-a5f21e810413\") " pod="kube-system/coredns-5d78c9869d-b58rt"
May 24 21:05:43 multinode-935345 kubelet[2361]: I0524 21:05:43.872584 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgdw\" (UniqueName: \"kubernetes.io/projected/0cf76689-90ee-4e10-80d0-67519768f5a1-kube-api-access-lhgdw\") pod \"storage-provisioner\" (UID: \"0cf76689-90ee-4e10-80d0-67519768f5a1\") " pod="kube-system/storage-provisioner"
May 24 21:05:44 multinode-935345 kubelet[2361]: I0524 21:05:44.868256 2361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b9f330af55490ac0ebc4e21c58461ba111a1a21e931ec9f3de2ba5c5a31f913"
May 24 21:05:45 multinode-935345 kubelet[2361]: I0524 21:05:45.135633 2361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f4cb5c83eefb0a29ce417a61a960257d5fe9204bb4b7049f1025307c5080bd4"
May 24 21:05:46 multinode-935345 kubelet[2361]: I0524 21:05:46.173198 2361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.172919171 podCreationTimestamp="2023-05-24 21:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 21:05:46.172782732 +0000 UTC m=+24.227864906" watchObservedRunningTime="2023-05-24 21:05:46.172919171 +0000 UTC m=+24.228001361"
May 24 21:06:22 multinode-935345 kubelet[2361]: E0524 21:06:22.443195 2361 iptables.go:575] "Could not set up iptables canary" err=<
May 24 21:06:22 multinode-935345 kubelet[2361]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
May 24 21:06:22 multinode-935345 kubelet[2361]: Perhaps ip6tables or your kernel needs to be upgraded.
May 24 21:06:22 multinode-935345 kubelet[2361]: > table=nat chain=KUBE-KUBELET-CANARY
May 24 21:06:40 multinode-935345 kubelet[2361]: I0524 21:06:40.421255 2361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-b58rt" podStartSLOduration=66.421176466 podCreationTimestamp="2023-05-24 21:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 21:05:46.200670744 +0000 UTC m=+24.255752934" watchObservedRunningTime="2023-05-24 21:06:40.421176466 +0000 UTC m=+78.476258662"
May 24 21:06:40 multinode-935345 kubelet[2361]: I0524 21:06:40.423180 2361 topology_manager.go:212] "Topology Admit Handler"
May 24 21:06:40 multinode-935345 kubelet[2361]: I0524 21:06:40.518892 2361 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8m7\" (UniqueName: \"kubernetes.io/projected/a038c8ec-384e-47f0-8d3c-5cda32210972-kube-api-access-mh8m7\") pod \"busybox-67b7f59bb-w5dwz\" (UID: \"a038c8ec-384e-47f0-8d3c-5cda32210972\") " pod="default/busybox-67b7f59bb-w5dwz"
May 24 21:06:42 multinode-935345 kubelet[2361]: I0524 21:06:42.659983 2361 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-w5dwz" podStartSLOduration=1.643893351 podCreationTimestamp="2023-05-24 21:06:40 +0000 UTC" firstStartedPulling="2023-05-24 21:06:41.347986615 +0000 UTC m=+79.403068788" lastFinishedPulling="2023-05-24 21:06:42.363961631 +0000 UTC m=+80.419043802" observedRunningTime="2023-05-24 21:06:42.659159723 +0000 UTC m=+80.714241914" watchObservedRunningTime="2023-05-24 21:06:42.659868365 +0000 UTC m=+80.714950555"
May 24 21:07:22 multinode-935345 kubelet[2361]: E0524 21:07:22.442641 2361 iptables.go:575] "Could not set up iptables canary" err=<
May 24 21:07:22 multinode-935345 kubelet[2361]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
May 24 21:07:22 multinode-935345 kubelet[2361]: Perhaps ip6tables or your kernel needs to be upgraded.
May 24 21:07:22 multinode-935345 kubelet[2361]: > table=nat chain=KUBE-KUBELET-CANARY
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-935345 -n multinode-935345
helpers_test.go:261: (dbg) Run: kubectl --context multinode-935345 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (20.71s)