=== RUN TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run: out/minikube-linux-amd64 -p multinode-415589 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 node start m03 --alsologtostderr: exit status 90 (19.068575762s)
-- stdout --
* Starting worker node multinode-415589-m03 in cluster multinode-415589
* Restarting existing kvm2 VM for "multinode-415589-m03" ...
-- /stdout --
** stderr **
I0919 16:56:39.199901 87826 out.go:296] Setting OutFile to fd 1 ...
I0919 16:56:39.200157 87826 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:56:39.200167 87826 out.go:309] Setting ErrFile to fd 2...
I0919 16:56:39.200172 87826 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:56:39.200342 87826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:56:39.200603 87826 mustload.go:65] Loading cluster: multinode-415589
I0919 16:56:39.200977 87826 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:56:39.201345 87826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:56:39.201395 87826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:56:39.216195 87826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
I0919 16:56:39.216602 87826 main.go:141] libmachine: () Calling .GetVersion
I0919 16:56:39.217166 87826 main.go:141] libmachine: Using API Version 1
I0919 16:56:39.217198 87826 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:56:39.217567 87826 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:56:39.217779 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetState
W0919 16:56:39.219568 87826 host.go:58] "multinode-415589-m03" host status: Stopped
I0919 16:56:39.221900 87826 out.go:177] * Starting worker node multinode-415589-m03 in cluster multinode-415589
I0919 16:56:39.223373 87826 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0919 16:56:39.223648 87826 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
I0919 16:56:39.223736 87826 cache.go:57] Caching tarball of preloaded images
I0919 16:56:39.223837 87826 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0919 16:56:39.223853 87826 cache.go:60] Finished verifying existence of preloaded tar for v1.28.2 on docker
I0919 16:56:39.224110 87826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:56:39.224375 87826 start.go:365] acquiring machines lock for multinode-415589-m03: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0919 16:56:39.224441 87826 start.go:369] acquired machines lock for "multinode-415589-m03" in 26.176µs
I0919 16:56:39.224467 87826 start.go:96] Skipping create...Using existing machine configuration
I0919 16:56:39.224481 87826 fix.go:54] fixHost starting: m03
I0919 16:56:39.225116 87826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:56:39.225156 87826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:56:39.239936 87826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
I0919 16:56:39.240271 87826 main.go:141] libmachine: () Calling .GetVersion
I0919 16:56:39.240686 87826 main.go:141] libmachine: Using API Version 1
I0919 16:56:39.240710 87826 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:56:39.241013 87826 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:56:39.241210 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:39.241372 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetState
I0919 16:56:39.242774 87826 fix.go:102] recreateIfNeeded on multinode-415589-m03: state=Stopped err=<nil>
I0919 16:56:39.242802 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
W0919 16:56:39.242979 87826 fix.go:128] unexpected machine state, will restart: <nil>
I0919 16:56:39.244671 87826 out.go:177] * Restarting existing kvm2 VM for "multinode-415589-m03" ...
I0919 16:56:39.245824 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .Start
I0919 16:56:39.246012 87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring networks are active...
I0919 16:56:39.246675 87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring network default is active
I0919 16:56:39.247090 87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring network mk-multinode-415589 is active
I0919 16:56:39.247382 87826 main.go:141] libmachine: (multinode-415589-m03) Getting domain xml...
I0919 16:56:39.247957 87826 main.go:141] libmachine: (multinode-415589-m03) Creating domain...
I0919 16:56:40.483150 87826 main.go:141] libmachine: (multinode-415589-m03) Waiting to get IP...
I0919 16:56:40.484175 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.484612 87826 main.go:141] libmachine: (multinode-415589-m03) Found IP for machine: 192.168.50.209
I0919 16:56:40.484649 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has current primary IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.484685 87826 main.go:141] libmachine: (multinode-415589-m03) Reserving static IP address...
I0919 16:56:40.485247 87826 main.go:141] libmachine: (multinode-415589-m03) Reserved static IP address: 192.168.50.209
I0919 16:56:40.485289 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "multinode-415589-m03", mac: "52:54:00:7a:de:cd", ip: "192.168.50.209"} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:55:59 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:40.485314 87826 main.go:141] libmachine: (multinode-415589-m03) Waiting for SSH to be available...
I0919 16:56:40.485346 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | skip adding static IP to network mk-multinode-415589 - found existing host DHCP lease matching {name: "multinode-415589-m03", mac: "52:54:00:7a:de:cd", ip: "192.168.50.209"}
I0919 16:56:40.485363 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Getting to WaitForSSH function...
I0919 16:56:40.487934 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.488393 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:55:59 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:40.488436 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.488641 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Using SSH client type: external
I0919 16:56:40.488682 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa (-rw-------)
I0919 16:56:40.488720 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0919 16:56:40.488740 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | About to run SSH command:
I0919 16:56:40.488755 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | exit 0
I0919 16:56:53.613468 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | SSH cmd err, output: <nil>:
I0919 16:56:53.613856 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetConfigRaw
I0919 16:56:53.614493 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:53.616937 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.617401 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.617436 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.617724 87826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:56:53.617906 87826 machine.go:88] provisioning docker machine ...
I0919 16:56:53.617923 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:53.618135 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
I0919 16:56:53.618306 87826 buildroot.go:166] provisioning hostname "multinode-415589-m03"
I0919 16:56:53.618322 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
I0919 16:56:53.618468 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.620497 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.620805 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.620859 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.620984 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:53.621153 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.621331 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.621461 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:53.621665 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:53.622159 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:53.622182 87826 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-415589-m03 && echo "multinode-415589-m03" | sudo tee /etc/hostname
I0919 16:56:53.745954 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-415589-m03
I0919 16:56:53.745999 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.748693 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.749081 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.749131 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.749287 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:53.749503 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.749674 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.749823 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:53.749982 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:53.750294 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:53.750312 87826 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-415589-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-415589-m03' | sudo tee -a /etc/hosts;
fi
fi
I0919 16:56:53.871256 87826 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0919 16:56:53.871299 87826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
I0919 16:56:53.871332 87826 buildroot.go:174] setting up certificates
I0919 16:56:53.871346 87826 provision.go:83] configureAuth start
I0919 16:56:53.871365 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
I0919 16:56:53.871708 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:53.874009 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.874436 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.874468 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.874575 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.876929 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.877341 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.877370 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.877499 87826 provision.go:138] copyHostCerts
I0919 16:56:53.877561 87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
I0919 16:56:53.877571 87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
I0919 16:56:53.877663 87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
I0919 16:56:53.877750 87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
I0919 16:56:53.877758 87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
I0919 16:56:53.877782 87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
I0919 16:56:53.877844 87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
I0919 16:56:53.877851 87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
I0919 16:56:53.877871 87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
I0919 16:56:53.877923 87826 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589-m03 san=[192.168.50.209 192.168.50.209 localhost 127.0.0.1 minikube multinode-415589-m03]
I0919 16:56:53.962274 87826 provision.go:172] copyRemoteCerts
I0919 16:56:53.962335 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0919 16:56:53.962360 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.965106 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.965469 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.965508 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.965637 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:53.965819 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.965980 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:53.966159 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:54.050135 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0919 16:56:54.072583 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0919 16:56:54.093867 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0919 16:56:54.119558 87826 provision.go:86] duration metric: configureAuth took 248.195368ms
I0919 16:56:54.119582 87826 buildroot.go:189] setting minikube options for container-runtime
I0919 16:56:54.119795 87826 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:56:54.119847 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:54.120138 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.122462 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.122807 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.122857 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.122964 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.123158 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.123316 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.123476 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.123656 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.123955 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.123968 87826 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0919 16:56:54.235038 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0919 16:56:54.235061 87826 buildroot.go:70] root file system type: tmpfs
I0919 16:56:54.235224 87826 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0919 16:56:54.235258 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.237841 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.238227 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.238265 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.238445 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.238630 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.238821 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.238942 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.239160 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.239526 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.239608 87826 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0919 16:56:54.362965 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0919 16:56:54.363002 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.365649 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.366013 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.366040 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.366202 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.366423 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.366593 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.366750 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.366961 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.367396 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.367419 87826 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0919 16:56:55.217276 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0919 16:56:55.217309 87826 machine.go:91] provisioned docker machine in 1.599388316s
I0919 16:56:55.217324 87826 start.go:300] post-start starting for "multinode-415589-m03" (driver="kvm2")
I0919 16:56:55.217338 87826 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0919 16:56:55.217386 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.217780 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0919 16:56:55.217825 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.220985 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.221442 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.221474 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.221637 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.221837 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.222041 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.222234 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.308140 87826 ssh_runner.go:195] Run: cat /etc/os-release
I0919 16:56:55.312207 87826 info.go:137] Remote host: Buildroot 2021.02.12
I0919 16:56:55.312232 87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
I0919 16:56:55.312324 87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
I0919 16:56:55.312438 87826 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
I0919 16:56:55.312559 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0919 16:56:55.321552 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
I0919 16:56:55.343266 87826 start.go:303] post-start completed in 125.926082ms
I0919 16:56:55.343292 87826 fix.go:56] fixHost completed within 16.118813076s
I0919 16:56:55.343314 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.346010 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.346433 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.346468 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.346642 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.346830 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.346967 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.347087 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.347273 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:55.347748 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:55.347764 87826 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0919 16:56:55.458471 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142615.405890302
I0919 16:56:55.458492 87826 fix.go:206] guest clock: 1695142615.405890302
I0919 16:56:55.458500 87826 fix.go:219] Guest: 2023-09-19 16:56:55.405890302 +0000 UTC Remote: 2023-09-19 16:56:55.343296526 +0000 UTC m=+16.174472057 (delta=62.593776ms)
I0919 16:56:55.458536 87826 fix.go:190] guest clock delta is within tolerance: 62.593776ms
I0919 16:56:55.458541 87826 start.go:83] releasing machines lock for "multinode-415589-m03", held for 16.23408758s
I0919 16:56:55.458562 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.458895 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:55.461888 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.462317 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.462352 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.462489 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463238 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463488 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463594 87826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0919 16:56:55.463655 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.463780 87826 ssh_runner.go:195] Run: systemctl --version
I0919 16:56:55.463802 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.466416 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466752 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.466791 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466913 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466943 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.467101 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.467219 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.467350 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.467374 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.467386 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.467516 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.467651 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.467782 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.467909 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.552742 87826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0919 16:56:55.580877 87826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0919 16:56:55.581059 87826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0919 16:56:55.599969 87826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0919 16:56:55.599994 87826 start.go:469] detecting cgroup driver to use...
I0919 16:56:55.600169 87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:56:55.618705 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0919 16:56:55.629933 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0919 16:56:55.641013 87826 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0919 16:56:55.641072 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0919 16:56:55.652627 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:56:55.662867 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0919 16:56:55.672560 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:56:55.682697 87826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0919 16:56:55.693463 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0919 16:56:55.703435 87826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0919 16:56:55.711943 87826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0919 16:56:55.720311 87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:55.826917 87826 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0919 16:56:55.844596 87826 start.go:469] detecting cgroup driver to use...
I0919 16:56:55.844704 87826 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0919 16:56:55.859155 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:56:55.873010 87826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0919 16:56:55.890737 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:56:55.903270 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:56:55.915537 87826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0919 16:56:55.947328 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:56:55.960937 87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:56:55.978060 87826 ssh_runner.go:195] Run: which cri-dockerd
I0919 16:56:55.981872 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0919 16:56:55.989568 87826 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0919 16:56:56.003670 87826 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0919 16:56:56.112061 87826 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0919 16:56:56.232698 87826 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0919 16:56:56.232733 87826 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0919 16:56:56.249459 87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:56.356638 87826 ssh_runner.go:195] Run: sudo systemctl restart docker
I0919 16:56:57.777045 87826 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.420368194s)
I0919 16:56:57.777131 87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:56:57.885360 87826 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0919 16:56:57.997961 87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:56:58.103664 87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:58.204072 87826 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0919 16:56:58.222608 87826 out.go:177]
W0919 16:56:58.223958 87826 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0919 16:56:58.223973 87826 out.go:239] *
W0919 16:56:58.227605 87826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0919 16:56:58.229244 87826 out.go:177]
** /stderr **
if ! grep -xq '.*\smultinode-415589-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589-m03/g' /etc/hosts;
else
echo '127.0.1.1 multinode-415589-m03' | sudo tee -a /etc/hosts;
fi
fi
I0919 16:56:53.871256 87826 main.go:141] libmachine: SSH cmd err, output: <nil>:
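The /etc/hosts fixup above returned cleanly, so the guest should now resolve its own hostname locally. A quick verification, run on the m03 guest:

  # confirm the hostname and the hosts entry handled by the script above
  hostname                                 # expect: multinode-415589-m03
  grep multinode-415589-m03 /etc/hosts     # some entry should map the node name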
I0919 16:56:53.871299 87826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
I0919 16:56:53.871332 87826 buildroot.go:174] setting up certificates
I0919 16:56:53.871346 87826 provision.go:83] configureAuth start
I0919 16:56:53.871365 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
I0919 16:56:53.871708 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:53.874009 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.874436 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.874468 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.874575 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.876929 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.877341 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.877370 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.877499 87826 provision.go:138] copyHostCerts
I0919 16:56:53.877561 87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
I0919 16:56:53.877571 87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
I0919 16:56:53.877663 87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
I0919 16:56:53.877750 87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
I0919 16:56:53.877758 87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
I0919 16:56:53.877782 87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
I0919 16:56:53.877844 87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
I0919 16:56:53.877851 87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
I0919 16:56:53.877871 87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
I0919 16:56:53.877923 87826 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589-m03 san=[192.168.50.209 192.168.50.209 localhost 127.0.0.1 minikube multinode-415589-m03]
I0919 16:56:53.962274 87826 provision.go:172] copyRemoteCerts
I0919 16:56:53.962335 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0919 16:56:53.962360 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.965106 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.965469 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.965508 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.965637 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:53.965819 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.965980 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:53.966159 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:54.050135 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0919 16:56:54.072583 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0919 16:56:54.093867 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0919 16:56:54.119558 87826 provision.go:86] duration metric: configureAuth took 248.195368ms
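configureAuth regenerated a server certificate with the SANs listed above (the node IP, localhost, 127.0.0.1, minikube and the node name) and pushed ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Assuming openssl is available in the guest image, the SANs can be confirmed with:

  # list the SANs baked into the freshly provisioned server certificate
  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'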
I0919 16:56:54.119582 87826 buildroot.go:189] setting minikube options for container-runtime
I0919 16:56:54.119795 87826 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:56:54.119847 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:54.120138 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.122462 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.122807 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.122857 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.122964 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.123158 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.123316 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.123476 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.123656 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.123955 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.123968 87826 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0919 16:56:54.235038 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0919 16:56:54.235061 87826 buildroot.go:70] root file system type: tmpfs
I0919 16:56:54.235224 87826 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0919 16:56:54.235258 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.237841 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.238227 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.238265 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.238445 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.238630 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.238821 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.238942 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.239160 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.239526 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.239608 87826 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0919 16:56:54.362965 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0919 16:56:54.363002 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.365649 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.366013 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.366040 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.366202 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.366423 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.366593 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.366750 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.366961 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.367396 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.367419 87826 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0919 16:56:55.217276 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
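Because diff could not stat an existing /lib/systemd/system/docker.service, the rendered unit was moved straight into place, enabled (hence the "Created symlink" line) and docker restarted. The unit systemd actually loaded can be reviewed on the guest, and, if systemd-analyze is shipped in the image, sanity-checked as well:

  # show the docker unit as loaded by systemd, including any drop-ins
  sudo systemctl cat docker.service
  # optional: flag malformed directives in the installed unit file
  sudo systemd-analyze verify /usr/lib/systemd/system/docker.service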
I0919 16:56:55.217309 87826 machine.go:91] provisioned docker machine in 1.599388316s
I0919 16:56:55.217324 87826 start.go:300] post-start starting for "multinode-415589-m03" (driver="kvm2")
I0919 16:56:55.217338 87826 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0919 16:56:55.217386 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.217780 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0919 16:56:55.217825 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.220985 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.221442 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.221474 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.221637 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.221837 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.222041 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.222234 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.308140 87826 ssh_runner.go:195] Run: cat /etc/os-release
I0919 16:56:55.312207 87826 info.go:137] Remote host: Buildroot 2021.02.12
I0919 16:56:55.312232 87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
I0919 16:56:55.312324 87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
I0919 16:56:55.312438 87826 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
I0919 16:56:55.312559 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0919 16:56:55.321552 87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
I0919 16:56:55.343266 87826 start.go:303] post-start completed in 125.926082ms
I0919 16:56:55.343292 87826 fix.go:56] fixHost completed within 16.118813076s
I0919 16:56:55.343314 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.346010 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.346433 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.346468 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.346642 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.346830 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.346967 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.347087 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.347273 87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:55.347748 87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:55.347764 87826 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0919 16:56:55.458471 87826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142615.405890302
I0919 16:56:55.458492 87826 fix.go:206] guest clock: 1695142615.405890302
I0919 16:56:55.458500 87826 fix.go:219] Guest: 2023-09-19 16:56:55.405890302 +0000 UTC Remote: 2023-09-19 16:56:55.343296526 +0000 UTC m=+16.174472057 (delta=62.593776ms)
I0919 16:56:55.458536 87826 fix.go:190] guest clock delta is within tolerance: 62.593776ms
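The clock fix-up reads the guest clock over SSH (judging by the epoch.nanoseconds value above, the command actually run is date +%s.%N; the %!s(MISSING) fragments look like logging artifacts of the command template) and compares it to the host; the ~62ms delta is inside tolerance, so no resync happens. A rough manual equivalent, assuming the same key and IP:

  # compare the guest clock on m03 against the host clock
  KEY=/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa
  GUEST=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.50.209 'date +%s.%N')
  HOST=$(date +%s.%N)
  awk -v g="$GUEST" -v h="$HOST" 'BEGIN { printf "delta=%.3fs\n", h - g }'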
I0919 16:56:55.458541 87826 start.go:83] releasing machines lock for "multinode-415589-m03", held for 16.23408758s
I0919 16:56:55.458562 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.458895 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:55.461888 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.462317 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.462352 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.462489 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463238 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463488 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463594 87826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0919 16:56:55.463655 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.463780 87826 ssh_runner.go:195] Run: systemctl --version
I0919 16:56:55.463802 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.466416 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466752 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.466791 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466913 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466943 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.467101 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.467219 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.467350 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.467374 87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.467386 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.467516 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.467651 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.467782 87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.467909 87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.552742 87826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0919 16:56:55.580877 87826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0919 16:56:55.581059 87826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0919 16:56:55.599969 87826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0919 16:56:55.599994 87826 start.go:469] detecting cgroup driver to use...
I0919 16:56:55.600169 87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:56:55.618705 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0919 16:56:55.629933 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0919 16:56:55.641013 87826 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0919 16:56:55.641072 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0919 16:56:55.652627 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:56:55.662867 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0919 16:56:55.672560 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:56:55.682697 87826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0919 16:56:55.693463 87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0919 16:56:55.703435 87826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0919 16:56:55.711943 87826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0919 16:56:55.720311 87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:55.826917 87826 ssh_runner.go:195] Run: sudo systemctl restart containerd
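Runtime detection first normalizes /etc/containerd/config.toml (SystemdCgroup=false for cgroupfs, runc v2, the pause:3.9 sandbox image, conf_dir=/etc/cni/net.d) and restarts containerd, even though docker is the runtime this profile ends up using. The effect of those sed edits can be spot-checked on the guest:

  # confirm the containerd settings the sed edits above should leave behind
  grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml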
I0919 16:56:55.844596 87826 start.go:469] detecting cgroup driver to use...
I0919 16:56:55.844704 87826 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0919 16:56:55.859155 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:56:55.873010 87826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0919 16:56:55.890737 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:56:55.903270 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:56:55.915537 87826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0919 16:56:55.947328 87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:56:55.960937 87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:56:55.978060 87826 ssh_runner.go:195] Run: which cri-dockerd
I0919 16:56:55.981872 87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0919 16:56:55.989568 87826 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0919 16:56:56.003670 87826 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0919 16:56:56.112061 87826 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0919 16:56:56.232698 87826 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0919 16:56:56.232733 87826 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0919 16:56:56.249459 87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:56.356638 87826 ssh_runner.go:195] Run: sudo systemctl restart docker
I0919 16:56:57.777045 87826 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.420368194s)
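With /etc/docker/daemon.json written (cgroupfs as the cgroup driver) and systemd reloaded, docker came back up in about 1.4s. Whether the daemon really picked up the intended driver can be checked on the guest:

  # show the cgroup driver docker actually started with
  docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs
  sudo cat /etc/docker/daemon.json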
I0919 16:56:57.777131 87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:56:57.885360 87826 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0919 16:56:57.997961 87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:56:58.103664 87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:58.204072 87826 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0919 16:56:58.222608 87826 out.go:177]
W0919 16:56:58.223958 87826 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0919 16:56:58.223973 87826 out.go:239] *
W0919 16:56:58.227605 87826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ * Please also attach the following file to the GitHub issue: │
│ * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0919 16:56:58.229244 87826 out.go:177]
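The restart stops at the very last provisioning step: "sudo systemctl restart cri-docker.socket" exits 1 on the guest and minikube surfaces that as RUNTIME_ENABLE. The captured output only points at journalctl, so the next diagnostic step would be to query the socket and service units on m03 directly; something along these lines should work:

  # inspect why cri-docker.socket refused to restart on the worker node
  minikube -p multinode-415589 ssh -n m03 -- sudo systemctl status cri-docker.socket cri-docker.service
  minikube -p multinode-415589 ssh -n m03 -- sudo journalctl -xeu cri-docker.socket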
multinode_test.go:257: node start returned an error. args "out/minikube-linux-amd64 -p multinode-415589 node start m03 --alsologtostderr": exit status 90
multinode_test.go:261: (dbg) Run: out/minikube-linux-amd64 -p multinode-415589 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 status: exit status 2 (574.043943ms)
-- stdout --
multinode-415589
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
multinode-415589-m02
type: Worker
host: Running
kubelet: Running
multinode-415589-m03
type: Worker
host: Running
kubelet: Stopped
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-415589 status" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-415589 -n multinode-415589
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p multinode-415589 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-415589 logs -n 25: (1.10180353s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
| cp | multinode-415589 cp multinode-415589:/home/docker/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m03:/home/docker/cp-test_multinode-415589_multinode-415589-m03.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-415589 ssh -n multinode-415589-m03 sudo cat | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | /home/docker/cp-test_multinode-415589_multinode-415589-m03.txt | | | | | |
| cp | multinode-415589 cp testdata/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m02:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | /tmp/TestMultiNodeserialCopyFile2979988656/001/cp-test_multinode-415589-m02.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589:/home/docker/cp-test_multinode-415589-m02_multinode-415589.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-415589 ssh -n multinode-415589 sudo cat | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | /home/docker/cp-test_multinode-415589-m02_multinode-415589.txt | | | | | |
| cp | multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m03:/home/docker/cp-test_multinode-415589-m02_multinode-415589-m03.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m02 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-415589 ssh -n multinode-415589-m03 sudo cat | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | /home/docker/cp-test_multinode-415589-m02_multinode-415589-m03.txt | | | | | |
| cp | multinode-415589 cp testdata/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m03:/home/docker/cp-test.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | /tmp/TestMultiNodeserialCopyFile2979988656/001/cp-test_multinode-415589-m03.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589:/home/docker/cp-test_multinode-415589-m03_multinode-415589.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-415589 ssh -n multinode-415589 sudo cat | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | /home/docker/cp-test_multinode-415589-m03_multinode-415589.txt | | | | | |
| cp | multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m02:/home/docker/cp-test_multinode-415589-m03_multinode-415589-m02.txt | | | | | |
| ssh | multinode-415589 ssh -n | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | multinode-415589-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-415589 ssh -n multinode-415589-m02 sudo cat | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| | /home/docker/cp-test_multinode-415589-m03_multinode-415589-m02.txt | | | | | |
| node | multinode-415589 node stop m03 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
| node | multinode-415589 node start | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | |
| | m03 --alsologtostderr | | | | | |
|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/09/19 16:53:23
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.21.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0919 16:53:23.713976 85253 out.go:296] Setting OutFile to fd 1 ...
I0919 16:53:23.714258 85253 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:53:23.714268 85253 out.go:309] Setting ErrFile to fd 2...
I0919 16:53:23.714276 85253 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:53:23.714513 85253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:53:23.715102 85253 out.go:303] Setting JSON to false
I0919 16:53:23.716008 85253 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5517,"bootTime":1695136887,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0919 16:53:23.716066 85253 start.go:138] virtualization: kvm guest
I0919 16:53:23.718848 85253 out.go:177] * [multinode-415589] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0919 16:53:23.720695 85253 notify.go:220] Checking for updates...
I0919 16:53:23.720705 85253 out.go:177] - MINIKUBE_LOCATION=17240
I0919 16:53:23.722480 85253 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0919 16:53:23.724037 85253 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
I0919 16:53:23.725431 85253 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
I0919 16:53:23.726676 85253 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0919 16:53:23.727940 85253 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0919 16:53:23.729336 85253 driver.go:373] Setting default libvirt URI to qemu:///system
I0919 16:53:23.764031 85253 out.go:177] * Using the kvm2 driver based on user configuration
I0919 16:53:23.765335 85253 start.go:298] selected driver: kvm2
I0919 16:53:23.765351 85253 start.go:902] validating driver "kvm2" against <nil>
I0919 16:53:23.765365 85253 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0919 16:53:23.766091 85253 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 16:53:23.766179 85253 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-65689/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 16:53:23.780403 85253 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0919 16:53:23.780470 85253 start_flags.go:307] no existing cluster config was found, will generate one from the flags
I0919 16:53:23.780799 85253 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0919 16:53:23.780844 85253 cni.go:84] Creating CNI manager for ""
I0919 16:53:23.780858 85253 cni.go:136] 0 nodes found, recommending kindnet
I0919 16:53:23.780868 85253 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
I0919 16:53:23.780884 85253 start_flags.go:321] config:
{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0919 16:53:23.781058 85253 iso.go:125] acquiring lock: {Name:mkdf0d42546c83faf1a624ccdb8d9876db7a1a92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 16:53:23.783366 85253 out.go:177] * Starting control plane node multinode-415589 in cluster multinode-415589
I0919 16:53:23.785163 85253 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0919 16:53:23.785194 85253 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
I0919 16:53:23.785201 85253 cache.go:57] Caching tarball of preloaded images
I0919 16:53:23.785300 85253 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0919 16:53:23.785311 85253 cache.go:60] Finished verifying existence of preloaded tar for v1.28.2 on docker
I0919 16:53:23.786488 85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:53:23.786551 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json: {Name:mk76d9cce25713484142aeb499f9fb85a87b44c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:23.786965 85253 start.go:365] acquiring machines lock for multinode-415589: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0919 16:53:23.787014 85253 start.go:369] acquired machines lock for "multinode-415589" in 27.275µs
I0919 16:53:23.787037 85253 start.go:93] Provisioning new machine with config: &{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0919 16:53:23.787141 85253 start.go:125] createHost starting for "" (driver="kvm2")
I0919 16:53:23.788733 85253 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0919 16:53:23.788876 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:53:23.788930 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:53:23.802270 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
I0919 16:53:23.802693 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:53:23.803288 85253 main.go:141] libmachine: Using API Version 1
I0919 16:53:23.803309 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:53:23.803609 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:53:23.803768 85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
I0919 16:53:23.803890 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:23.804020 85253 start.go:159] libmachine.API.Create for "multinode-415589" (driver="kvm2")
I0919 16:53:23.804049 85253 client.go:168] LocalClient.Create starting
I0919 16:53:23.804080 85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem
I0919 16:53:23.804113 85253 main.go:141] libmachine: Decoding PEM data...
I0919 16:53:23.804128 85253 main.go:141] libmachine: Parsing certificate...
I0919 16:53:23.804178 85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem
I0919 16:53:23.804197 85253 main.go:141] libmachine: Decoding PEM data...
I0919 16:53:23.804212 85253 main.go:141] libmachine: Parsing certificate...
I0919 16:53:23.804229 85253 main.go:141] libmachine: Running pre-create checks...
I0919 16:53:23.804239 85253 main.go:141] libmachine: (multinode-415589) Calling .PreCreateCheck
I0919 16:53:23.804541 85253 main.go:141] libmachine: (multinode-415589) Calling .GetConfigRaw
I0919 16:53:23.804879 85253 main.go:141] libmachine: Creating machine...
I0919 16:53:23.804893 85253 main.go:141] libmachine: (multinode-415589) Calling .Create
I0919 16:53:23.805014 85253 main.go:141] libmachine: (multinode-415589) Creating KVM machine...
I0919 16:53:23.806092 85253 main.go:141] libmachine: (multinode-415589) DBG | found existing default KVM network
I0919 16:53:23.806740 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:23.806613 85275 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d8:d6:c0} reservation:<nil>}
I0919 16:53:23.807272 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:23.807204 85275 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f8e0}
I0919 16:53:23.812325 85253 main.go:141] libmachine: (multinode-415589) DBG | trying to create private KVM network mk-multinode-415589 192.168.50.0/24...
I0919 16:53:23.882095 85253 main.go:141] libmachine: (multinode-415589) DBG | private KVM network mk-multinode-415589 192.168.50.0/24 created
I0919 16:53:23.882150 85253 main.go:141] libmachine: (multinode-415589) Setting up store path in /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589 ...
I0919 16:53:23.882167 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:23.882055 85275 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-65689/.minikube
I0919 16:53:23.882189 85253 main.go:141] libmachine: (multinode-415589) Building disk image from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
I0919 16:53:23.882213 85253 main.go:141] libmachine: (multinode-415589) Downloading /home/jenkins/minikube-integration/17240-65689/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
I0919 16:53:24.095846 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:24.095706 85275 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa...
I0919 16:53:24.564281 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:24.564126 85275 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/multinode-415589.rawdisk...
I0919 16:53:24.564323 85253 main.go:141] libmachine: (multinode-415589) DBG | Writing magic tar header
I0919 16:53:24.564337 85253 main.go:141] libmachine: (multinode-415589) DBG | Writing SSH key tar header
I0919 16:53:24.564353 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:24.564257 85275 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589 ...
I0919 16:53:24.564393 85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589
I0919 16:53:24.564410 85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines
I0919 16:53:24.564419 85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589 (perms=drwx------)
I0919 16:53:24.564429 85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube
I0919 16:53:24.564444 85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines (perms=drwxr-xr-x)
I0919 16:53:24.564462 85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689
I0919 16:53:24.564478 85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0919 16:53:24.564489 85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins
I0919 16:53:24.564503 85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home
I0919 16:53:24.564514 85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube (perms=drwxr-xr-x)
I0919 16:53:24.564525 85253 main.go:141] libmachine: (multinode-415589) DBG | Skipping /home - not owner
I0919 16:53:24.564541 85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689 (perms=drwxrwxr-x)
I0919 16:53:24.564557 85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0919 16:53:24.564572 85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0919 16:53:24.564587 85253 main.go:141] libmachine: (multinode-415589) Creating domain...
I0919 16:53:24.565801 85253 main.go:141] libmachine: (multinode-415589) define libvirt domain using xml:
I0919 16:53:24.565839 85253 main.go:141] libmachine: (multinode-415589) <domain type='kvm'>
I0919 16:53:24.565849 85253 main.go:141] libmachine: (multinode-415589) <name>multinode-415589</name>
I0919 16:53:24.565860 85253 main.go:141] libmachine: (multinode-415589) <memory unit='MiB'>2200</memory>
I0919 16:53:24.565869 85253 main.go:141] libmachine: (multinode-415589) <vcpu>2</vcpu>
I0919 16:53:24.565874 85253 main.go:141] libmachine: (multinode-415589) <features>
I0919 16:53:24.565881 85253 main.go:141] libmachine: (multinode-415589) <acpi/>
I0919 16:53:24.565886 85253 main.go:141] libmachine: (multinode-415589) <apic/>
I0919 16:53:24.565894 85253 main.go:141] libmachine: (multinode-415589) <pae/>
I0919 16:53:24.565902 85253 main.go:141] libmachine: (multinode-415589)
I0919 16:53:24.565911 85253 main.go:141] libmachine: (multinode-415589) </features>
I0919 16:53:24.565917 85253 main.go:141] libmachine: (multinode-415589) <cpu mode='host-passthrough'>
I0919 16:53:24.565925 85253 main.go:141] libmachine: (multinode-415589)
I0919 16:53:24.565930 85253 main.go:141] libmachine: (multinode-415589) </cpu>
I0919 16:53:24.565970 85253 main.go:141] libmachine: (multinode-415589) <os>
I0919 16:53:24.565995 85253 main.go:141] libmachine: (multinode-415589) <type>hvm</type>
I0919 16:53:24.566018 85253 main.go:141] libmachine: (multinode-415589) <boot dev='cdrom'/>
I0919 16:53:24.566033 85253 main.go:141] libmachine: (multinode-415589) <boot dev='hd'/>
I0919 16:53:24.566047 85253 main.go:141] libmachine: (multinode-415589) <bootmenu enable='no'/>
I0919 16:53:24.566059 85253 main.go:141] libmachine: (multinode-415589) </os>
I0919 16:53:24.566072 85253 main.go:141] libmachine: (multinode-415589) <devices>
I0919 16:53:24.566087 85253 main.go:141] libmachine: (multinode-415589) <disk type='file' device='cdrom'>
I0919 16:53:24.566105 85253 main.go:141] libmachine: (multinode-415589) <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/boot2docker.iso'/>
I0919 16:53:24.566120 85253 main.go:141] libmachine: (multinode-415589) <target dev='hdc' bus='scsi'/>
I0919 16:53:24.566133 85253 main.go:141] libmachine: (multinode-415589) <readonly/>
I0919 16:53:24.566150 85253 main.go:141] libmachine: (multinode-415589) </disk>
I0919 16:53:24.566165 85253 main.go:141] libmachine: (multinode-415589) <disk type='file' device='disk'>
I0919 16:53:24.566180 85253 main.go:141] libmachine: (multinode-415589) <driver name='qemu' type='raw' cache='default' io='threads' />
I0919 16:53:24.566198 85253 main.go:141] libmachine: (multinode-415589) <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/multinode-415589.rawdisk'/>
I0919 16:53:24.566213 85253 main.go:141] libmachine: (multinode-415589) <target dev='hda' bus='virtio'/>
I0919 16:53:24.566234 85253 main.go:141] libmachine: (multinode-415589) </disk>
I0919 16:53:24.566253 85253 main.go:141] libmachine: (multinode-415589) <interface type='network'>
I0919 16:53:24.566263 85253 main.go:141] libmachine: (multinode-415589) <source network='mk-multinode-415589'/>
I0919 16:53:24.566269 85253 main.go:141] libmachine: (multinode-415589) <model type='virtio'/>
I0919 16:53:24.566278 85253 main.go:141] libmachine: (multinode-415589) </interface>
I0919 16:53:24.566283 85253 main.go:141] libmachine: (multinode-415589) <interface type='network'>
I0919 16:53:24.566292 85253 main.go:141] libmachine: (multinode-415589) <source network='default'/>
I0919 16:53:24.566298 85253 main.go:141] libmachine: (multinode-415589) <model type='virtio'/>
I0919 16:53:24.566306 85253 main.go:141] libmachine: (multinode-415589) </interface>
I0919 16:53:24.566316 85253 main.go:141] libmachine: (multinode-415589) <serial type='pty'>
I0919 16:53:24.566330 85253 main.go:141] libmachine: (multinode-415589) <target port='0'/>
I0919 16:53:24.566343 85253 main.go:141] libmachine: (multinode-415589) </serial>
I0919 16:53:24.566351 85253 main.go:141] libmachine: (multinode-415589) <console type='pty'>
I0919 16:53:24.566362 85253 main.go:141] libmachine: (multinode-415589) <target type='serial' port='0'/>
I0919 16:53:24.566371 85253 main.go:141] libmachine: (multinode-415589) </console>
I0919 16:53:24.566377 85253 main.go:141] libmachine: (multinode-415589) <rng model='virtio'>
I0919 16:53:24.566384 85253 main.go:141] libmachine: (multinode-415589) <backend model='random'>/dev/random</backend>
I0919 16:53:24.566394 85253 main.go:141] libmachine: (multinode-415589) </rng>
I0919 16:53:24.566400 85253 main.go:141] libmachine: (multinode-415589)
I0919 16:53:24.566405 85253 main.go:141] libmachine: (multinode-415589)
I0919 16:53:24.566411 85253 main.go:141] libmachine: (multinode-415589) </devices>
I0919 16:53:24.566418 85253 main.go:141] libmachine: (multinode-415589) </domain>
I0919 16:53:24.566427 85253 main.go:141] libmachine: (multinode-415589)
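[editor note] The XML printed above is the domain definition the kvm2 driver hands to libvirt before the "Creating domain..." step. As a rough illustration only (not minikube's actual code), defining and booting a domain from such XML with the Go libvirt bindings looks roughly like the sketch below; the import path and the qemu:///system connection URI are assumptions.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

// defineAndStart registers a domain XML document with libvirt and boots it,
// mirroring the "define libvirt domain using xml" and "Creating domain..."
// steps in the log above.
func defineAndStart(domainXML string) error {
	// qemu:///system is an assumption; the driver may use another URI.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Register the definition with libvirt.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Start the defined VM.
	return dom.Create()
}

func main() {
	// A real <domain> document (like the one printed above) goes here.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}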
I0919 16:53:24.570337 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:34:f3:25 in network default
I0919 16:53:24.570941 85253 main.go:141] libmachine: (multinode-415589) Ensuring networks are active...
I0919 16:53:24.570966 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:24.571696 85253 main.go:141] libmachine: (multinode-415589) Ensuring network default is active
I0919 16:53:24.572035 85253 main.go:141] libmachine: (multinode-415589) Ensuring network mk-multinode-415589 is active
I0919 16:53:24.572581 85253 main.go:141] libmachine: (multinode-415589) Getting domain xml...
I0919 16:53:24.573336 85253 main.go:141] libmachine: (multinode-415589) Creating domain...
I0919 16:53:25.782983 85253 main.go:141] libmachine: (multinode-415589) Waiting to get IP...
I0919 16:53:25.783879 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:25.784331 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:25.784366 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:25.784312 85275 retry.go:31] will retry after 252.974185ms: waiting for machine to come up
I0919 16:53:26.038922 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:26.039386 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:26.039414 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:26.039308 85275 retry.go:31] will retry after 358.552851ms: waiting for machine to come up
I0919 16:53:26.399726 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:26.400173 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:26.400216 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:26.400122 85275 retry.go:31] will retry after 311.756361ms: waiting for machine to come up
I0919 16:53:26.713663 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:26.714166 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:26.714189 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:26.714114 85275 retry.go:31] will retry after 503.231809ms: waiting for machine to come up
I0919 16:53:27.218721 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:27.219145 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:27.219193 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:27.219087 85275 retry.go:31] will retry after 722.334547ms: waiting for machine to come up
I0919 16:53:27.942991 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:27.943444 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:27.943484 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:27.943402 85275 retry.go:31] will retry after 906.092251ms: waiting for machine to come up
I0919 16:53:28.850606 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:28.850997 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:28.851055 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:28.850978 85275 retry.go:31] will retry after 993.305084ms: waiting for machine to come up
I0919 16:53:29.846159 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:29.846687 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:29.846720 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:29.846589 85275 retry.go:31] will retry after 1.181964129s: waiting for machine to come up
I0919 16:53:31.030026 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:31.030546 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:31.030580 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:31.030471 85275 retry.go:31] will retry after 1.503627047s: waiting for machine to come up
I0919 16:53:32.536090 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:32.536662 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:32.536687 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:32.536601 85275 retry.go:31] will retry after 2.132959485s: waiting for machine to come up
I0919 16:53:34.671533 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:34.672140 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:34.672180 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:34.672088 85275 retry.go:31] will retry after 1.835249108s: waiting for machine to come up
I0919 16:53:36.510708 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:36.511209 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:36.511239 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:36.511191 85275 retry.go:31] will retry after 2.854076315s: waiting for machine to come up
I0919 16:53:39.366850 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:39.367241 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:39.367283 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:39.367193 85275 retry.go:31] will retry after 2.736485042s: waiting for machine to come up
I0919 16:53:42.107079 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:42.107489 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
I0919 16:53:42.107515 85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:42.107430 85275 retry.go:31] will retry after 3.431002257s: waiting for machine to come up
I0919 16:53:45.540721 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.541204 85253 main.go:141] libmachine: (multinode-415589) Found IP for machine: 192.168.50.11
I0919 16:53:45.541222 85253 main.go:141] libmachine: (multinode-415589) Reserving static IP address...
I0919 16:53:45.541232 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has current primary IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.541644 85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find host DHCP lease matching {name: "multinode-415589", mac: "52:54:00:a4:6c:54", ip: "192.168.50.11"} in network mk-multinode-415589
I0919 16:53:45.612920 85253 main.go:141] libmachine: (multinode-415589) DBG | Getting to WaitForSSH function...
I0919 16:53:45.612959 85253 main.go:141] libmachine: (multinode-415589) Reserved static IP address: 192.168.50.11
I0919 16:53:45.613017 85253 main.go:141] libmachine: (multinode-415589) Waiting for SSH to be available...
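[editor note] The repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff-style retry loop that polls the DHCP leases until the new domain gets an address. A minimal, generic sketch of that pattern follows; it is not minikube's retry package, and the attempt count, base delay, and jitter are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// growing, jittered interval between tries, much like the varying
// "will retry after 252ms / 358ms / ..." delays in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(15, 250*time.Millisecond, func() error {
		tries++
		if tries < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil // pretend the DHCP lease finally appeared
	})
	fmt.Println("done:", err)
}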
I0919 16:53:45.615527 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.615904 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:45.615948 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.616103 85253 main.go:141] libmachine: (multinode-415589) DBG | Using SSH client type: external
I0919 16:53:45.616148 85253 main.go:141] libmachine: (multinode-415589) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa (-rw-------)
I0919 16:53:45.616196 85253 main.go:141] libmachine: (multinode-415589) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa -p 22] /usr/bin/ssh <nil>}
I0919 16:53:45.616219 85253 main.go:141] libmachine: (multinode-415589) DBG | About to run SSH command:
I0919 16:53:45.616239 85253 main.go:141] libmachine: (multinode-415589) DBG | exit 0
I0919 16:53:45.713404 85253 main.go:141] libmachine: (multinode-415589) DBG | SSH cmd err, output: <nil>:
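[editor note] The "Using SSH client type: external" probe above shells out to /usr/bin/ssh with the option list shown and runs "exit 0" to confirm the guest accepts connections. A rough os/exec sketch of that probe follows; the IP, user, and key path are copied from the log, the option list is abbreviated, and this is not the driver's exact code.

package main

import (
	"fmt"
	"os/exec"
)

// sshReachable runs "exit 0" on the guest via the system ssh binary,
// mirroring the WaitForSSH step in the log.
func sshReachable(ip, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v (%s)", err, out)
	}
	return nil
}

func main() {
	err := sshReachable("192.168.50.11",
		"/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa")
	fmt.Println("reachable:", err == nil)
}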
I0919 16:53:45.713684 85253 main.go:141] libmachine: (multinode-415589) KVM machine creation complete!
I0919 16:53:45.713939 85253 main.go:141] libmachine: (multinode-415589) Calling .GetConfigRaw
I0919 16:53:45.714622 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:45.714861 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:45.715018 85253 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0919 16:53:45.715036 85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
I0919 16:53:45.716280 85253 main.go:141] libmachine: Detecting operating system of created instance...
I0919 16:53:45.716327 85253 main.go:141] libmachine: Waiting for SSH to be available...
I0919 16:53:45.716334 85253 main.go:141] libmachine: Getting to WaitForSSH function...
I0919 16:53:45.716341 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:45.718601 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.718916 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:45.718942 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.719071 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:45.719260 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:45.719405 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:45.719528 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:45.719685 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:45.720119 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:45.720137 85253 main.go:141] libmachine: About to run SSH command:
exit 0
I0919 16:53:45.848916 85253 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0919 16:53:45.848937 85253 main.go:141] libmachine: Detecting the provisioner...
I0919 16:53:45.848945 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:45.851880 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.852261 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:45.852302 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.852488 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:45.852694 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:45.852886 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:45.853072 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:45.853259 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:45.853760 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:45.853776 85253 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0919 16:53:45.982231 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-gb090841-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0919 16:53:45.982317 85253 main.go:141] libmachine: found compatible host: buildroot
I0919 16:53:45.982330 85253 main.go:141] libmachine: Provisioning with buildroot...
I0919 16:53:45.982340 85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
I0919 16:53:45.982612 85253 buildroot.go:166] provisioning hostname "multinode-415589"
I0919 16:53:45.982635 85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
I0919 16:53:45.982835 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:45.985679 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.986006 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:45.986027 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:45.986340 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:45.986550 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:45.986740 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:45.986918 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:45.987151 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:45.987472 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:45.987487 85253 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-415589 && echo "multinode-415589" | sudo tee /etc/hostname
I0919 16:53:46.130414 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-415589
I0919 16:53:46.130455 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:46.133233 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.133645 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:46.133682 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.133829 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:46.134026 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.134189 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.134342 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:46.134511 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:46.134836 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:46.134853 85253 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-415589' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589/g' /etc/hosts;
else
echo '127.0.1.1 multinode-415589' | sudo tee -a /etc/hosts;
fi
fi
I0919 16:53:46.272842 85253 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0919 16:53:46.272872 85253 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
I0919 16:53:46.272921 85253 buildroot.go:174] setting up certificates
I0919 16:53:46.272944 85253 provision.go:83] configureAuth start
I0919 16:53:46.272972 85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
I0919 16:53:46.273307 85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
I0919 16:53:46.275860 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.276232 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:46.276288 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.276389 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:46.278401 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.278721 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:46.278754 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.278874 85253 provision.go:138] copyHostCerts
I0919 16:53:46.278907 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
I0919 16:53:46.278969 85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
I0919 16:53:46.278981 85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
I0919 16:53:46.279043 85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
I0919 16:53:46.279149 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
I0919 16:53:46.279176 85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
I0919 16:53:46.279183 85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
I0919 16:53:46.279218 85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
I0919 16:53:46.279295 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
I0919 16:53:46.279316 85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
I0919 16:53:46.279323 85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
I0919 16:53:46.279350 85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
I0919 16:53:46.279411 85253 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589 san=[192.168.50.11 192.168.50.11 localhost 127.0.0.1 minikube multinode-415589]
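[editor note] The "generating server cert ... san=[...]" line above produces a TLS server certificate signed by the local minikube CA and carrying the listed IPs and hostnames as subject alternative names. A condensed crypto/x509 sketch of that idea follows; the SAN list and the 26280h expiry come from the log and config dump above, while the key sizes and everything else are assumptions, not minikube's exact implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// A throwaway CA key/cert pair stands in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-415589"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.11"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-415589"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("server cert: %d DER bytes, SANs: %v %v", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}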
I0919 16:53:46.414692 85253 provision.go:172] copyRemoteCerts
I0919 16:53:46.414763 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0919 16:53:46.414813 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:46.417481 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.417794 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:46.417830 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.417971 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:46.418131 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.418238 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:46.418351 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
I0919 16:53:46.510528 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0919 16:53:46.510602 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0919 16:53:46.533565 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem -> /etc/docker/server.pem
I0919 16:53:46.533649 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0919 16:53:46.556587 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0919 16:53:46.556651 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0919 16:53:46.578928 85253 provision.go:86] duration metric: configureAuth took 305.966092ms
I0919 16:53:46.578952 85253 buildroot.go:189] setting minikube options for container-runtime
I0919 16:53:46.579161 85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:53:46.579191 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:46.579510 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:46.582101 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.582507 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:46.582540 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.582654 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:46.582845 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.582960 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.583146 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:46.583286 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:46.583592 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:46.583604 85253 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0919 16:53:46.715173 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0919 16:53:46.715197 85253 buildroot.go:70] root file system type: tmpfs
I0919 16:53:46.715388 85253 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0919 16:53:46.715428 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:46.718215 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.718600 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:46.718649 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.718781 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:46.718949 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.719107 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.719220 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:46.719382 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:46.719688 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:46.719756 85253 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0919 16:53:46.862632 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0919 16:53:46.862674 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:46.865253 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.865654 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:46.865686 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:46.865877 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:46.866082 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.866283 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:46.866437 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:46.866639 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:46.867022 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:46.867043 85253 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0919 16:53:47.689007 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0919 16:53:47.689047 85253 main.go:141] libmachine: Checking connection to Docker...
I0919 16:53:47.689064 85253 main.go:141] libmachine: (multinode-415589) Calling .GetURL
I0919 16:53:47.690339 85253 main.go:141] libmachine: (multinode-415589) DBG | Using libvirt version 6000000
I0919 16:53:47.692513 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.692835 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:47.692867 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.693034 85253 main.go:141] libmachine: Docker is up and running!
I0919 16:53:47.693051 85253 main.go:141] libmachine: Reticulating splines...
I0919 16:53:47.693065 85253 client.go:171] LocalClient.Create took 23.888998966s
I0919 16:53:47.693088 85253 start.go:167] duration metric: libmachine.API.Create for "multinode-415589" took 23.889070559s
I0919 16:53:47.693098 85253 start.go:300] post-start starting for "multinode-415589" (driver="kvm2")
I0919 16:53:47.693107 85253 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0919 16:53:47.693124 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:47.693386 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0919 16:53:47.693413 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:47.695565 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.695907 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:47.695940 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.696026 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:47.696190 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:47.696366 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:47.696513 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
I0919 16:53:47.791129 85253 ssh_runner.go:195] Run: cat /etc/os-release
I0919 16:53:47.795143 85253 command_runner.go:130] > NAME=Buildroot
I0919 16:53:47.795164 85253 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
I0919 16:53:47.795170 85253 command_runner.go:130] > ID=buildroot
I0919 16:53:47.795175 85253 command_runner.go:130] > VERSION_ID=2021.02.12
I0919 16:53:47.795180 85253 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0919 16:53:47.795380 85253 info.go:137] Remote host: Buildroot 2021.02.12
I0919 16:53:47.795400 85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
I0919 16:53:47.795465 85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
I0919 16:53:47.795573 85253 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
I0919 16:53:47.795587 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /etc/ssl/certs/733972.pem
I0919 16:53:47.795697 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0919 16:53:47.803841 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
I0919 16:53:47.827423 85253 start.go:303] post-start completed in 134.313518ms
I0919 16:53:47.827470 85253 main.go:141] libmachine: (multinode-415589) Calling .GetConfigRaw
I0919 16:53:47.828046 85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
I0919 16:53:47.830771 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.831133 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:47.831167 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.831467 85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:53:47.831681 85253 start.go:128] duration metric: createHost completed in 24.044529067s
I0919 16:53:47.831712 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:47.834010 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.834358 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:47.834393 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.834504 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:47.834717 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:47.834866 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:47.834987 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:47.835153 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:53:47.835515 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.11 22 <nil> <nil>}
I0919 16:53:47.835529 85253 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0919 16:53:47.970730 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142427.940641526
I0919 16:53:47.970753 85253 fix.go:206] guest clock: 1695142427.940641526
I0919 16:53:47.970762 85253 fix.go:219] Guest: 2023-09-19 16:53:47.940641526 +0000 UTC Remote: 2023-09-19 16:53:47.831697205 +0000 UTC m=+24.148141812 (delta=108.944321ms)
I0919 16:53:47.970812 85253 fix.go:190] guest clock delta is within tolerance: 108.944321ms
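[editor note] The "guest clock" lines above compare the guest's `date +%s.%N` output against the host reference time and accept the drift if it stays within a tolerance. A tiny sketch of that comparison follows, using the exact values from the log; the 2s threshold is an assumption, not minikube's actual value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's epoch-seconds output and returns the
// absolute drift from the given host reference time.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values taken from the fix.go lines above.
	host := time.Date(2023, 9, 19, 16, 53, 47, 831697205, time.UTC)
	d, err := guestClockDelta("1695142427.940641526", host)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta %v, within %v: %v\n", d, tolerance, d <= tolerance)
}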
I0919 16:53:47.970820 85253 start.go:83] releasing machines lock for "multinode-415589", held for 24.183793705s
I0919 16:53:47.970853 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:47.971128 85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
I0919 16:53:47.973546 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.973887 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:47.973922 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.974000 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:47.974567 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:47.974733 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:53:47.974818 85253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0919 16:53:47.974870 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:47.974956 85253 ssh_runner.go:195] Run: cat /version.json
I0919 16:53:47.974982 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:53:47.977511 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.977736 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.977996 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:47.978019 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.978169 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:47.978295 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:47.978325 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:47.978342 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:47.978506 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:53:47.978515 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:47.978696 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:53:47.978712 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
I0919 16:53:47.978870 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:53:47.979016 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
I0919 16:53:48.074567 85253 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
I0919 16:53:48.074953 85253 ssh_runner.go:195] Run: systemctl --version
I0919 16:53:48.100924 85253 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0919 16:53:48.100983 85253 command_runner.go:130] > systemd 247 (247)
I0919 16:53:48.101006 85253 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
I0919 16:53:48.101086 85253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0919 16:53:48.106790 85253 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0919 16:53:48.106848 85253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0919 16:53:48.106902 85253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0919 16:53:48.123897 85253 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0919 16:53:48.124310 85253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0919 16:53:48.124337 85253 start.go:469] detecting cgroup driver to use...
I0919 16:53:48.124477 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:53:48.140085 85253 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0919 16:53:48.140516 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0919 16:53:48.150715 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0919 16:53:48.160723 85253 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0919 16:53:48.160782 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0919 16:53:48.170725 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:53:48.180864 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0919 16:53:48.190799 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:53:48.200759 85253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0919 16:53:48.210920 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0919 16:53:48.220562 85253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0919 16:53:48.229676 85253 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0919 16:53:48.229735 85253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0919 16:53:48.238521 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:53:48.338814 85253 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0919 16:53:48.357869 85253 start.go:469] detecting cgroup driver to use...
I0919 16:53:48.357971 85253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0919 16:53:48.370639 85253 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0919 16:53:48.371471 85253 command_runner.go:130] > [Unit]
I0919 16:53:48.371490 85253 command_runner.go:130] > Description=Docker Application Container Engine
I0919 16:53:48.371504 85253 command_runner.go:130] > Documentation=https://docs.docker.com
I0919 16:53:48.371518 85253 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0919 16:53:48.371530 85253 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0919 16:53:48.371541 85253 command_runner.go:130] > StartLimitBurst=3
I0919 16:53:48.371548 85253 command_runner.go:130] > StartLimitIntervalSec=60
I0919 16:53:48.371552 85253 command_runner.go:130] > [Service]
I0919 16:53:48.371557 85253 command_runner.go:130] > Type=notify
I0919 16:53:48.371561 85253 command_runner.go:130] > Restart=on-failure
I0919 16:53:48.371571 85253 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0919 16:53:48.371581 85253 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0919 16:53:48.371589 85253 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0919 16:53:48.371601 85253 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0919 16:53:48.371615 85253 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0919 16:53:48.371629 85253 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0919 16:53:48.371645 85253 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0919 16:53:48.371657 85253 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0919 16:53:48.371666 85253 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0919 16:53:48.371673 85253 command_runner.go:130] > ExecStart=
I0919 16:53:48.371689 85253 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0919 16:53:48.371699 85253 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0919 16:53:48.371713 85253 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0919 16:53:48.371727 85253 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0919 16:53:48.371739 85253 command_runner.go:130] > LimitNOFILE=infinity
I0919 16:53:48.371749 85253 command_runner.go:130] > LimitNPROC=infinity
I0919 16:53:48.371756 85253 command_runner.go:130] > LimitCORE=infinity
I0919 16:53:48.371765 85253 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0919 16:53:48.371771 85253 command_runner.go:130] > # Only systemd 226 and above support this version.
I0919 16:53:48.371776 85253 command_runner.go:130] > TasksMax=infinity
I0919 16:53:48.371780 85253 command_runner.go:130] > TimeoutStartSec=0
I0919 16:53:48.371786 85253 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0919 16:53:48.371793 85253 command_runner.go:130] > Delegate=yes
I0919 16:53:48.371799 85253 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0919 16:53:48.371808 85253 command_runner.go:130] > KillMode=process
I0919 16:53:48.371815 85253 command_runner.go:130] > [Install]
I0919 16:53:48.371832 85253 command_runner.go:130] > WantedBy=multi-user.target
I0919 16:53:48.372071 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:53:48.383901 85253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0919 16:53:48.402079 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:53:48.413580 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:53:48.425486 85253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0919 16:53:48.451047 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:53:48.463426 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:53:48.480146 85253 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0919 16:53:48.480546 85253 ssh_runner.go:195] Run: which cri-dockerd
I0919 16:53:48.484165 85253 command_runner.go:130] > /usr/bin/cri-dockerd
I0919 16:53:48.484277 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0919 16:53:48.492192 85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0919 16:53:48.507705 85253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0919 16:53:48.607130 85253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0919 16:53:48.719205 85253 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0919 16:53:48.719240 85253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0919 16:53:48.735474 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:53:48.837757 85253 ssh_runner.go:195] Run: sudo systemctl restart docker
I0919 16:53:50.243142 85253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405328532s)
I0919 16:53:50.243221 85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:53:50.343223 85253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0919 16:53:50.450233 85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:53:50.563110 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:53:50.687287 85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0919 16:53:50.707191 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:53:50.823936 85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0919 16:53:50.925971 85253 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0919 16:53:50.926046 85253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0919 16:53:50.933114 85253 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0919 16:53:50.933131 85253 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0919 16:53:50.933137 85253 command_runner.go:130] > Device: 16h/22d Inode: 875 Links: 1
I0919 16:53:50.933144 85253 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0919 16:53:50.933149 85253 command_runner.go:130] > Access: 2023-09-19 16:53:50.814533213 +0000
I0919 16:53:50.933154 85253 command_runner.go:130] > Modify: 2023-09-19 16:53:50.814533213 +0000
I0919 16:53:50.933159 85253 command_runner.go:130] > Change: 2023-09-19 16:53:50.817537984 +0000
I0919 16:53:50.933163 85253 command_runner.go:130] > Birth: -
I0919 16:53:50.933368 85253 start.go:537] Will wait 60s for crictl version
I0919 16:53:50.933417 85253 ssh_runner.go:195] Run: which crictl
I0919 16:53:50.938241 85253 command_runner.go:130] > /usr/bin/crictl
I0919 16:53:50.938302 85253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0919 16:53:50.994244 85253 command_runner.go:130] > Version: 0.1.0
I0919 16:53:50.994273 85253 command_runner.go:130] > RuntimeName: docker
I0919 16:53:50.994295 85253 command_runner.go:130] > RuntimeVersion: 24.0.6
I0919 16:53:50.994403 85253 command_runner.go:130] > RuntimeApiVersion: v1
I0919 16:53:50.996201 85253 start.go:553] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1
I0919 16:53:50.996264 85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0919 16:53:51.024171 85253 command_runner.go:130] > 24.0.6
I0919 16:53:51.024447 85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0919 16:53:51.049103 85253 command_runner.go:130] > 24.0.6
I0919 16:53:51.050830 85253 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
I0919 16:53:51.050890 85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
I0919 16:53:51.054068 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:51.054408 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:53:51.054450 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:53:51.054599 85253 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0919 16:53:51.058775 85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0919 16:53:51.071368 85253 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0919 16:53:51.071419 85253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0919 16:53:51.089091 85253 docker.go:636] Got preloaded images:
I0919 16:53:51.089111 85253 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
I0919 16:53:51.089173 85253 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0919 16:53:51.098069 85253 command_runner.go:139] > {"Repositories":{}}
I0919 16:53:51.098209 85253 ssh_runner.go:195] Run: which lz4
I0919 16:53:51.102191 85253 command_runner.go:130] > /usr/bin/lz4
I0919 16:53:51.102218 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0919 16:53:51.102289 85253 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0919 16:53:51.106440 85253 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0919 16:53:51.106470 85253 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0919 16:53:51.106485 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422207204 bytes)
I0919 16:53:52.757386 85253 docker.go:600] Took 1.655115 seconds to copy over tarball
I0919 16:53:52.757462 85253 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0919 16:53:55.109451 85253 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351953272s)
I0919 16:53:55.109484 85253 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0919 16:53:55.147873 85253 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0919 16:53:55.157240 85253 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.2":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.2":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.2":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e
61df5900fa0bb0"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.2":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
I0919 16:53:55.157396 85253 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
I0919 16:53:55.174287 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:53:55.282401 85253 ssh_runner.go:195] Run: sudo systemctl restart docker
I0919 16:53:58.428664 85253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.146218249s)
I0919 16:53:58.428786 85253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0919 16:53:58.453664 85253 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
I0919 16:53:58.453686 85253 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
I0919 16:53:58.453692 85253 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
I0919 16:53:58.453702 85253 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
I0919 16:53:58.453709 85253 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
I0919 16:53:58.453720 85253 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
I0919 16:53:58.453728 85253 command_runner.go:130] > registry.k8s.io/pause:3.9
I0919 16:53:58.453738 85253 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
I0919 16:53:58.453846 85253 docker.go:636] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0919 16:53:58.453870 85253 cache_images.go:84] Images are preloaded, skipping loading
I0919 16:53:58.453934 85253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0919 16:53:58.482968 85253 command_runner.go:130] > cgroupfs
I0919 16:53:58.484069 85253 cni.go:84] Creating CNI manager for ""
I0919 16:53:58.484083 85253 cni.go:136] 1 nodes found, recommending kindnet
I0919 16:53:58.484102 85253 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0919 16:53:58.484130 85253 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.11 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-415589 NodeName:multinode-415589 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0919 16:53:58.484279 85253 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.11
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-415589"
kubeletExtraArgs:
node-ip: 192.168.50.11
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.11"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0919 16:53:58.484375 85253 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-415589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.11
[Install]
config:
{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0919 16:53:58.484440 85253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
I0919 16:53:58.494650 85253 command_runner.go:130] > kubeadm
I0919 16:53:58.494675 85253 command_runner.go:130] > kubectl
I0919 16:53:58.494681 85253 command_runner.go:130] > kubelet
I0919 16:53:58.494708 85253 binaries.go:44] Found k8s binaries, skipping transfer
I0919 16:53:58.494792 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0919 16:53:58.504326 85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
I0919 16:53:58.520389 85253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0919 16:53:58.535724 85253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
I0919 16:53:58.551227 85253 ssh_runner.go:195] Run: grep 192.168.50.11 control-plane.minikube.internal$ /etc/hosts
I0919 16:53:58.554818 85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.11 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0919 16:53:58.565786 85253 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589 for IP: 192.168.50.11
I0919 16:53:58.565811 85253 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:58.566310 85253 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
I0919 16:53:58.566464 85253 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
I0919 16:53:58.566554 85253 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key
I0919 16:53:58.566598 85253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt with IP's: []
I0919 16:53:58.622220 85253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt ...
I0919 16:53:58.622252 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt: {Name:mk7ec29a810283c598a22f6552f2c706bdcbda66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:58.622443 85253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key ...
I0919 16:53:58.622457 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key: {Name:mk0d34c3af68693664488a90c719b9e5e36f6ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:58.622561 85253 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6
I0919 16:53:58.622579 85253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6 with IP's: [192.168.50.11 10.96.0.1 127.0.0.1 10.0.0.1]
I0919 16:53:58.831877 85253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6 ...
I0919 16:53:58.831910 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6: {Name:mkb2c2ec3feeb95a530c3f5c703f0b1be4b37155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:58.832092 85253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6 ...
I0919 16:53:58.832108 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6: {Name:mkef0c4bc7ead672418d86c55797aef46d113dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:58.832205 85253 certs.go:337] copying /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6 -> /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt
I0919 16:53:58.832301 85253 certs.go:341] copying /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6 -> /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key
I0919 16:53:58.832373 85253 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key
I0919 16:53:58.832394 85253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt with IP's: []
I0919 16:53:58.924169 85253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt ...
I0919 16:53:58.924199 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt: {Name:mk3637c165ef46259ddb4842eba5fdcf9d5a67da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:58.924381 85253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key ...
I0919 16:53:58.924396 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key: {Name:mkffc46297afcf14a50f349f0971a70fbc1459c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:53:58.924495 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0919 16:53:58.924518 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0919 16:53:58.924534 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0919 16:53:58.924550 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0919 16:53:58.924570 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0919 16:53:58.924589 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0919 16:53:58.924608 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0919 16:53:58.924628 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0919 16:53:58.924701 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
W0919 16:53:58.924750 85253 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
I0919 16:53:58.924768 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
I0919 16:53:58.924805 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
I0919 16:53:58.924839 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
I0919 16:53:58.924876 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
I0919 16:53:58.924932 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
I0919 16:53:58.924972 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /usr/share/ca-certificates/733972.pem
I0919 16:53:58.924995 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0919 16:53:58.925013 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem -> /usr/share/ca-certificates/73397.pem
I0919 16:53:58.925530 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0919 16:53:58.949383 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0919 16:53:58.971267 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0919 16:53:58.992601 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0919 16:53:59.014627 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0919 16:53:59.036680 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0919 16:53:59.059344 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0919 16:53:59.080960 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0919 16:53:59.102635 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
I0919 16:53:59.124135 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0919 16:53:59.145432 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
I0919 16:53:59.166779 85253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0919 16:53:59.182036 85253 ssh_runner.go:195] Run: openssl version
I0919 16:53:59.187213 85253 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0919 16:53:59.187445 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0919 16:53:59.197576 85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0919 16:53:59.202026 85253 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
I0919 16:53:59.202049 85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
I0919 16:53:59.202092 85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0919 16:53:59.207183 85253 command_runner.go:130] > b5213941
I0919 16:53:59.207418 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0919 16:53:59.217446 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
I0919 16:53:59.227431 85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
I0919 16:53:59.231902 85253 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
I0919 16:53:59.231925 85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
I0919 16:53:59.231961 85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
I0919 16:53:59.237159 85253 command_runner.go:130] > 51391683
I0919 16:53:59.237232 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
I0919 16:53:59.247187 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
I0919 16:53:59.257230 85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
I0919 16:53:59.261733 85253 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
I0919 16:53:59.261816 85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
I0919 16:53:59.261862 85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
I0919 16:53:59.266903 85253 command_runner.go:130] > 3ec20f2e
I0919 16:53:59.267073 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
I0919 16:53:59.277487 85253 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0919 16:53:59.281460 85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0919 16:53:59.281795 85253 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0919 16:53:59.281847 85253 kubeadm.go:404] StartCluster: {Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0919 16:53:59.281980 85253 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0919 16:53:59.301062 85253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0919 16:53:59.310950 85253 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
I0919 16:53:59.310980 85253 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
I0919 16:53:59.310990 85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I0919 16:53:59.311059 85253 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0919 16:53:59.320120 85253 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0919 16:53:59.329254 85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0919 16:53:59.329281 85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0919 16:53:59.329291 85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0919 16:53:59.329303 85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0919 16:53:59.329343 85253 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0919 16:53:59.329379 85253 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0919 16:53:59.669291 85253 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0919 16:53:59.669329 85253 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0919 16:54:11.198810 85253 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
I0919 16:54:11.198853 85253 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
I0919 16:54:11.198911 85253 kubeadm.go:322] [preflight] Running pre-flight checks
I0919 16:54:11.198922 85253 command_runner.go:130] > [preflight] Running pre-flight checks
I0919 16:54:11.199013 85253 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0919 16:54:11.199020 85253 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
I0919 16:54:11.199112 85253 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0919 16:54:11.199122 85253 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I0919 16:54:11.199219 85253 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0919 16:54:11.199239 85253 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0919 16:54:11.199335 85253 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0919 16:54:11.200988 85253 out.go:204] - Generating certificates and keys ...
I0919 16:54:11.199379 85253 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0919 16:54:11.201086 85253 kubeadm.go:322] [certs] Using existing ca certificate authority
I0919 16:54:11.201102 85253 command_runner.go:130] > [certs] Using existing ca certificate authority
I0919 16:54:11.201176 85253 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0919 16:54:11.201188 85253 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
I0919 16:54:11.201265 85253 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0919 16:54:11.201285 85253 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
I0919 16:54:11.201366 85253 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0919 16:54:11.201378 85253 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
I0919 16:54:11.201455 85253 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0919 16:54:11.201466 85253 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
I0919 16:54:11.201663 85253 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0919 16:54:11.201685 85253 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
I0919 16:54:11.201762 85253 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0919 16:54:11.201776 85253 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
I0919 16:54:11.201957 85253 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
I0919 16:54:11.201969 85253 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
I0919 16:54:11.202051 85253 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0919 16:54:11.202070 85253 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
I0919 16:54:11.202171 85253 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
I0919 16:54:11.202185 85253 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
I0919 16:54:11.202249 85253 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0919 16:54:11.202260 85253 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
I0919 16:54:11.202315 85253 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0919 16:54:11.202326 85253 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
I0919 16:54:11.202394 85253 kubeadm.go:322] [certs] Generating "sa" key and public key
I0919 16:54:11.202403 85253 command_runner.go:130] > [certs] Generating "sa" key and public key
I0919 16:54:11.202450 85253 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0919 16:54:11.202471 85253 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0919 16:54:11.202549 85253 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0919 16:54:11.202564 85253 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
I0919 16:54:11.202623 85253 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0919 16:54:11.202634 85253 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0919 16:54:11.202729 85253 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0919 16:54:11.202741 85253 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0919 16:54:11.202820 85253 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0919 16:54:11.202833 85253 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0919 16:54:11.202928 85253 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0919 16:54:11.202937 85253 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0919 16:54:11.202990 85253 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0919 16:54:11.203005 85253 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0919 16:54:11.205690 85253 out.go:204] - Booting up control plane ...
I0919 16:54:11.205795 85253 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I0919 16:54:11.205806 85253 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0919 16:54:11.205900 85253 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0919 16:54:11.205908 85253 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0919 16:54:11.206014 85253 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I0919 16:54:11.206037 85253 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0919 16:54:11.206186 85253 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0919 16:54:11.206200 85253 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0919 16:54:11.206303 85253 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0919 16:54:11.206315 85253 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0919 16:54:11.206371 85253 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0919 16:54:11.206383 85253 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0919 16:54:11.206577 85253 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0919 16:54:11.206588 85253 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0919 16:54:11.206692 85253 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.505224 seconds
I0919 16:54:11.206702 85253 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505224 seconds
I0919 16:54:11.206841 85253 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0919 16:54:11.206856 85253 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0919 16:54:11.207005 85253 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0919 16:54:11.207015 85253 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0919 16:54:11.207089 85253 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
I0919 16:54:11.207100 85253 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0919 16:54:11.207331 85253 command_runner.go:130] > [mark-control-plane] Marking the node multinode-415589 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0919 16:54:11.207342 85253 kubeadm.go:322] [mark-control-plane] Marking the node multinode-415589 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0919 16:54:11.207399 85253 command_runner.go:130] > [bootstrap-token] Using token: a9n71v.a9970pz0xsqn3fiz
I0919 16:54:11.207409 85253 kubeadm.go:322] [bootstrap-token] Using token: a9n71v.a9970pz0xsqn3fiz
I0919 16:54:11.209021 85253 out.go:204] - Configuring RBAC rules ...
I0919 16:54:11.209273 85253 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0919 16:54:11.209292 85253 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0919 16:54:11.209358 85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0919 16:54:11.209368 85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0919 16:54:11.209529 85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0919 16:54:11.209540 85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0919 16:54:11.209676 85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0919 16:54:11.209696 85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0919 16:54:11.209865 85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0919 16:54:11.209880 85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0919 16:54:11.209998 85253 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0919 16:54:11.210008 85253 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0919 16:54:11.210175 85253 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0919 16:54:11.210179 85253 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0919 16:54:11.210255 85253 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
I0919 16:54:11.210259 85253 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0919 16:54:11.210303 85253 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
I0919 16:54:11.210307 85253 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0919 16:54:11.210310 85253 kubeadm.go:322]
I0919 16:54:11.210372 85253 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
I0919 16:54:11.210381 85253 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0919 16:54:11.210392 85253 kubeadm.go:322]
I0919 16:54:11.210495 85253 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
I0919 16:54:11.210506 85253 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0919 16:54:11.210513 85253 kubeadm.go:322]
I0919 16:54:11.210551 85253 command_runner.go:130] > mkdir -p $HOME/.kube
I0919 16:54:11.210559 85253 kubeadm.go:322] mkdir -p $HOME/.kube
I0919 16:54:11.210637 85253 command_runner.go:130] > sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0919 16:54:11.210649 85253 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0919 16:54:11.210768 85253 command_runner.go:130] > sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0919 16:54:11.210783 85253 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0919 16:54:11.210794 85253 kubeadm.go:322]
I0919 16:54:11.210871 85253 command_runner.go:130] > Alternatively, if you are the root user, you can run:
I0919 16:54:11.210887 85253 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0919 16:54:11.210891 85253 kubeadm.go:322]
I0919 16:54:11.210957 85253 command_runner.go:130] > export KUBECONFIG=/etc/kubernetes/admin.conf
I0919 16:54:11.210972 85253 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0919 16:54:11.210987 85253 kubeadm.go:322]
I0919 16:54:11.211064 85253 command_runner.go:130] > You should now deploy a pod network to the cluster.
I0919 16:54:11.211071 85253 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0919 16:54:11.211158 85253 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0919 16:54:11.211166 85253 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0919 16:54:11.211252 85253 command_runner.go:130] > https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0919 16:54:11.211263 85253 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0919 16:54:11.211269 85253 kubeadm.go:322]
I0919 16:54:11.211388 85253 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
I0919 16:54:11.211400 85253 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0919 16:54:11.211503 85253 command_runner.go:130] > and service account keys on each node and then running the following as root:
I0919 16:54:11.211514 85253 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0919 16:54:11.211520 85253 kubeadm.go:322]
I0919 16:54:11.211643 85253 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
I0919 16:54:11.211652 85253 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
I0919 16:54:11.211801 85253 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 \
I0919 16:54:11.211812 85253 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 \
I0919 16:54:11.211840 85253 command_runner.go:130] > --control-plane
I0919 16:54:11.211849 85253 kubeadm.go:322] --control-plane
I0919 16:54:11.211855 85253 kubeadm.go:322]
I0919 16:54:11.211963 85253 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
I0919 16:54:11.211973 85253 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0919 16:54:11.211989 85253 kubeadm.go:322]
I0919 16:54:11.212097 85253 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
I0919 16:54:11.212109 85253 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
I0919 16:54:11.212234 85253 command_runner.go:130] > --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510
I0919 16:54:11.212252 85253 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510
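
kubeadm has finished initializing the control plane at this point; the join commands above are echoed verbatim from its output and reuse the bootstrap token a9n71v.a9970pz0xsqn3fiz. Purely as an illustrative check (not something this run performs), the control-plane pods could be listed from inside the VM with the same kubectl binary and kubeconfig paths the log uses:

  # hypothetical verification, run inside the VM (e.g. via "minikube ssh")
  sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get pods -n kube-system
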
I0919 16:54:11.212260 85253 cni.go:84] Creating CNI manager for ""
I0919 16:54:11.212268 85253 cni.go:136] 1 nodes found, recommending kindnet
I0919 16:54:11.213988 85253 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0919 16:54:11.215465 85253 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0919 16:54:11.221692 85253 command_runner.go:130] > File: /opt/cni/bin/portmap
I0919 16:54:11.221716 85253 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I0919 16:54:11.221725 85253 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I0919 16:54:11.221733 85253 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0919 16:54:11.221743 85253 command_runner.go:130] > Access: 2023-09-19 16:53:37.309210321 +0000
I0919 16:54:11.221755 85253 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
I0919 16:54:11.221764 85253 command_runner.go:130] > Change: 2023-09-19 16:53:35.557210321 +0000
I0919 16:54:11.221771 85253 command_runner.go:130] > Birth: -
I0919 16:54:11.221879 85253 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
I0919 16:54:11.221898 85253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0919 16:54:11.253438 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0919 16:54:12.404972 85253 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
I0919 16:54:12.411309 85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
I0919 16:54:12.420524 85253 command_runner.go:130] > serviceaccount/kindnet created
I0919 16:54:12.433168 85253 command_runner.go:130] > daemonset.apps/kindnet created
I0919 16:54:12.435928 85253 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.182450679s)
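
The four "created" lines confirm the kindnet CNI manifest (ClusterRole, ClusterRoleBinding, ServiceAccount and DaemonSet) was applied in about 1.18s. A minimal sketch, not part of this run, of waiting for that DaemonSet to roll out using the same binary and kubeconfig paths (the 120s timeout is an arbitrary choice):

  sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system rollout status daemonset/kindnet --timeout=120s
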
I0919 16:54:12.435978 85253 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0919 16:54:12.436080 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:12.436096 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=multinode-415589 minikube.k8s.io/updated_at=2023_09_19T16_54_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:12.705419 85253 command_runner.go:130] > node/multinode-415589 labeled
I0919 16:54:12.707175 85253 command_runner.go:130] > -16
I0919 16:54:12.707206 85253 ops.go:34] apiserver oom_adj: -16
I0919 16:54:12.707258 85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
I0919 16:54:12.707383 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:12.825175 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:12.827013 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:12.922384 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:13.424936 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:13.535895 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:13.924999 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:14.017877 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:14.424639 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:14.527560 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:14.925174 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:15.026461 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:15.425070 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:15.521662 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:15.925003 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:16.045288 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:16.424619 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:16.527388 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:16.924621 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:17.030516 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:17.424906 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:17.570848 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:17.925249 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:18.020664 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:18.424314 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:18.530310 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:18.924834 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:19.011504 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:19.424471 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:19.525979 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:19.925156 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:20.012117 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:20.424655 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:20.513423 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:20.924643 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:21.029491 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:21.424663 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:21.523197 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:21.924780 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:22.031995 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:22.424538 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:22.563143 85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
I0919 16:54:22.924656 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0919 16:54:23.050221 85253 command_runner.go:130] > NAME SECRETS AGE
I0919 16:54:23.050251 85253 command_runner.go:130] > default 0 1s
I0919 16:54:23.050283 85253 kubeadm.go:1081] duration metric: took 10.614274284s to wait for elevateKubeSystemPrivileges.
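
The long run of 'serviceaccounts "default" not found' errors above is a poll loop, not a failure: minikube retries "kubectl get sa default" until the ServiceAccount controller creates the account, which took about 10.6s here. A bash sketch of the same wait pattern, shown only to make the retry explicit:

  # hypothetical equivalent of the loop above: retry until the default ServiceAccount exists
  until sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get sa default >/dev/null 2>&1; do
    sleep 0.5
  done
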
I0919 16:54:23.050300 85253 kubeadm.go:406] StartCluster complete in 23.768456629s
I0919 16:54:23.050322 85253 settings.go:142] acquiring lock: {Name:mk5b0472b3a6dd507de44affe9807f6a73f90c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:54:23.050401 85253 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17240-65689/kubeconfig
I0919 16:54:23.051523 85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:54:23.052392 85253 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17240-65689/kubeconfig
I0919 16:54:23.052569 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0919 16:54:23.052798 85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:54:23.052462 85253 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0919 16:54:23.052866 85253 addons.go:69] Setting storage-provisioner=true in profile "multinode-415589"
I0919 16:54:23.052886 85253 addons.go:231] Setting addon storage-provisioner=true in "multinode-415589"
I0919 16:54:23.052886 85253 addons.go:69] Setting default-storageclass=true in profile "multinode-415589"
I0919 16:54:23.052910 85253 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-415589"
I0919 16:54:23.052968 85253 host.go:66] Checking if "multinode-415589" exists ...
I0919 16:54:23.052950 85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0919 16:54:23.053721 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:54:23.053724 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:54:23.053762 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:54:23.053782 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:54:23.054074 85253 cert_rotation.go:137] Starting client certificate rotation controller
I0919 16:54:23.054475 85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0919 16:54:23.054493 85253 round_trippers.go:469] Request Headers:
I0919 16:54:23.054505 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:23.054514 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:23.066452 85253 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0919 16:54:23.066476 85253 round_trippers.go:577] Response Headers:
I0919 16:54:23.066486 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:23 GMT
I0919 16:54:23.066495 85253 round_trippers.go:580] Audit-Id: 3fb3b891-e57d-408a-a790-659aa608d8f8
I0919 16:54:23.066503 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:23.066518 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:23.066530 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:23.066538 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:23.066549 85253 round_trippers.go:580] Content-Length: 291
I0919 16:54:23.066582 85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"233","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
I0919 16:54:23.067102 85253 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"233","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
I0919 16:54:23.067176 85253 round_trippers.go:463] PUT https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0919 16:54:23.067191 85253 round_trippers.go:469] Request Headers:
I0919 16:54:23.067201 85253 round_trippers.go:473] Content-Type: application/json
I0919 16:54:23.067210 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:23.067225 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:23.069176 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
I0919 16:54:23.069496 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
I0919 16:54:23.069788 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:54:23.069991 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:54:23.070303 85253 main.go:141] libmachine: Using API Version 1
I0919 16:54:23.070326 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:54:23.070589 85253 main.go:141] libmachine: Using API Version 1
I0919 16:54:23.070612 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:54:23.070663 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:54:23.070874 85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
I0919 16:54:23.070964 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:54:23.071548 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:54:23.071596 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:54:23.073129 85253 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17240-65689/kubeconfig
I0919 16:54:23.073391 85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0919 16:54:23.073711 85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/storage.k8s.io/v1/storageclasses
I0919 16:54:23.073723 85253 round_trippers.go:469] Request Headers:
I0919 16:54:23.073740 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:23.073751 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:23.081718 85253 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
I0919 16:54:23.081742 85253 round_trippers.go:577] Response Headers:
I0919 16:54:23.081754 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:23.081761 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:23.081770 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:23.081778 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:23.081785 85253 round_trippers.go:580] Content-Length: 291
I0919 16:54:23.081795 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:23 GMT
I0919 16:54:23.081803 85253 round_trippers.go:580] Audit-Id: a47bac4e-528a-45e9-bf5b-1a488ee61e83
I0919 16:54:23.082138 85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"314","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
I0919 16:54:23.082271 85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0919 16:54:23.082296 85253 round_trippers.go:469] Request Headers:
I0919 16:54:23.082308 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:23.082322 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:23.082630 85253 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0919 16:54:23.082649 85253 round_trippers.go:577] Response Headers:
I0919 16:54:23.082658 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:23.082667 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:23.082677 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:23.082689 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:23.082697 85253 round_trippers.go:580] Content-Length: 109
I0919 16:54:23.082714 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:23 GMT
I0919 16:54:23.082726 85253 round_trippers.go:580] Audit-Id: b7b49ea7-8771-4c2f-86c9-b5dbf1df04f7
I0919 16:54:23.082748 85253 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"314"},"items":[]}
I0919 16:54:23.083030 85253 addons.go:231] Setting addon default-storageclass=true in "multinode-415589"
I0919 16:54:23.083074 85253 host.go:66] Checking if "multinode-415589" exists ...
I0919 16:54:23.083444 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:54:23.083489 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:54:23.084530 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:23.084559 85253 round_trippers.go:577] Response Headers:
I0919 16:54:23.084570 85253 round_trippers.go:580] Content-Length: 291
I0919 16:54:23.084585 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:23 GMT
I0919 16:54:23.084602 85253 round_trippers.go:580] Audit-Id: ef73916a-8c8f-4942-90e0-b4b220c44e45
I0919 16:54:23.084611 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:23.084622 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:23.084630 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:23.084641 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:23.084667 85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"314","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
I0919 16:54:23.084764 85253 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-415589" context rescaled to 1 replicas
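
The GET/PUT pair on the coredns /scale subresource above rewrites spec.replicas from 2 to 1, which is what the "rescaled to 1 replicas" line summarizes. For reference, the equivalent one-liner (assuming kubectl points at this cluster's kubeconfig) would be roughly:

  kubectl -n kube-system scale deployment coredns --replicas=1
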
I0919 16:54:23.084797 85253 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0919 16:54:23.087739 85253 out.go:177] * Verifying Kubernetes components...
I0919 16:54:23.089185 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0919 16:54:23.087325 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
I0919 16:54:23.089664 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:54:23.090200 85253 main.go:141] libmachine: Using API Version 1
I0919 16:54:23.090229 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:54:23.090630 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:54:23.090855 85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
I0919 16:54:23.092613 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:54:23.095622 85253 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0919 16:54:23.097249 85253 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0919 16:54:23.097268 85253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0919 16:54:23.097289 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:54:23.099014 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
I0919 16:54:23.099485 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:54:23.099942 85253 main.go:141] libmachine: Using API Version 1
I0919 16:54:23.099962 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:54:23.100309 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:54:23.100338 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:54:23.100817 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:54:23.100833 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:54:23.100908 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:54:23.100959 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:54:23.100990 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:54:23.101140 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:54:23.101272 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:54:23.101518 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
I0919 16:54:23.115209 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
I0919 16:54:23.115596 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:54:23.116057 85253 main.go:141] libmachine: Using API Version 1
I0919 16:54:23.116087 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:54:23.116464 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:54:23.116680 85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
I0919 16:54:23.118234 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:54:23.118471 85253 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0919 16:54:23.118486 85253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0919 16:54:23.118504 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:54:23.121818 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:54:23.122267 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:54:23.122288 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:54:23.122446 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:54:23.122608 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:54:23.122799 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:54:23.122957 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
I0919 16:54:23.344565 85253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0919 16:54:23.436992 85253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0919 16:54:23.571337 85253 command_runner.go:130] > apiVersion: v1
I0919 16:54:23.571358 85253 command_runner.go:130] > data:
I0919 16:54:23.571362 85253 command_runner.go:130] > Corefile: |
I0919 16:54:23.571366 85253 command_runner.go:130] > .:53 {
I0919 16:54:23.571370 85253 command_runner.go:130] > errors
I0919 16:54:23.571375 85253 command_runner.go:130] > health {
I0919 16:54:23.571380 85253 command_runner.go:130] > lameduck 5s
I0919 16:54:23.571384 85253 command_runner.go:130] > }
I0919 16:54:23.571389 85253 command_runner.go:130] > ready
I0919 16:54:23.571399 85253 command_runner.go:130] > kubernetes cluster.local in-addr.arpa ip6.arpa {
I0919 16:54:23.571406 85253 command_runner.go:130] > pods insecure
I0919 16:54:23.571420 85253 command_runner.go:130] > fallthrough in-addr.arpa ip6.arpa
I0919 16:54:23.571435 85253 command_runner.go:130] > ttl 30
I0919 16:54:23.571442 85253 command_runner.go:130] > }
I0919 16:54:23.571450 85253 command_runner.go:130] > prometheus :9153
I0919 16:54:23.571462 85253 command_runner.go:130] > forward . /etc/resolv.conf {
I0919 16:54:23.571470 85253 command_runner.go:130] > max_concurrent 1000
I0919 16:54:23.571477 85253 command_runner.go:130] > }
I0919 16:54:23.571481 85253 command_runner.go:130] > cache 30
I0919 16:54:23.571485 85253 command_runner.go:130] > loop
I0919 16:54:23.571492 85253 command_runner.go:130] > reload
I0919 16:54:23.571501 85253 command_runner.go:130] > loadbalance
I0919 16:54:23.571511 85253 command_runner.go:130] > }
I0919 16:54:23.571518 85253 command_runner.go:130] > kind: ConfigMap
I0919 16:54:23.571528 85253 command_runner.go:130] > metadata:
I0919 16:54:23.571544 85253 command_runner.go:130] > creationTimestamp: "2023-09-19T16:54:11Z"
I0919 16:54:23.571554 85253 command_runner.go:130] > name: coredns
I0919 16:54:23.571565 85253 command_runner.go:130] > namespace: kube-system
I0919 16:54:23.571575 85253 command_runner.go:130] > resourceVersion: "229"
I0919 16:54:23.571587 85253 command_runner.go:130] > uid: 0111cf12-53fa-4f83-8267-d0f1ad7aadd6
I0919 16:54:23.571766 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
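
The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block (mapping host.minikube.internal to 192.168.50.1) in front of the forward plugin and a log directive in front of errors, then replaces the ConfigMap. One way to inspect the result afterwards, assuming kubectl points at the same cluster (illustrative only):

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
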
I0919 16:54:23.572139 85253 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17240-65689/kubeconfig
I0919 16:54:23.572463 85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0919 16:54:23.572798 85253 node_ready.go:35] waiting up to 6m0s for node "multinode-415589" to be "Ready" ...
I0919 16:54:23.572883 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:23.572893 85253 round_trippers.go:469] Request Headers:
I0919 16:54:23.572906 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:23.572924 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:23.575029 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:23.575045 85253 round_trippers.go:577] Response Headers:
I0919 16:54:23.575051 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:23.575057 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:23 GMT
I0919 16:54:23.575062 85253 round_trippers.go:580] Audit-Id: d0ebbafb-fef0-46cc-88e7-24e96c02d631
I0919 16:54:23.575067 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:23.575072 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:23.575077 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:23.575284 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:23.575837 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:23.575851 85253 round_trippers.go:469] Request Headers:
I0919 16:54:23.575857 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:23.575864 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:23.578050 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:23.578065 85253 round_trippers.go:577] Response Headers:
I0919 16:54:23.578075 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:23.578084 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:23.578093 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:23.578102 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:23.578113 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:23 GMT
I0919 16:54:23.578124 85253 round_trippers.go:580] Audit-Id: 2b73ffab-9777-40c9-8846-a50585810aa1
I0919 16:54:23.578320 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:24.078944 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:24.078969 85253 round_trippers.go:469] Request Headers:
I0919 16:54:24.078979 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:24.078990 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:24.083444 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:54:24.083466 85253 round_trippers.go:577] Response Headers:
I0919 16:54:24.083473 85253 round_trippers.go:580] Audit-Id: af937ad5-0e36-48c3-a08a-c957bcca19ab
I0919 16:54:24.083481 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:24.083490 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:24.083499 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:24.083507 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:24.083520 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:24 GMT
I0919 16:54:24.083661 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:24.579659 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:24.579685 85253 round_trippers.go:469] Request Headers:
I0919 16:54:24.579697 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:24.579708 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:24.581929 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:24.581952 85253 round_trippers.go:577] Response Headers:
I0919 16:54:24.581963 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:24.581972 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:24 GMT
I0919 16:54:24.581994 85253 round_trippers.go:580] Audit-Id: fb770a18-7e10-4f64-895c-21eaac304460
I0919 16:54:24.582006 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:24.582015 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:24.582025 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:24.582294 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:24.807796 85253 command_runner.go:130] > serviceaccount/storage-provisioner created
I0919 16:54:24.817012 85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
I0919 16:54:24.832410 85253 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0919 16:54:24.847786 85253 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
I0919 16:54:24.866141 85253 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
I0919 16:54:24.888064 85253 command_runner.go:130] > pod/storage-provisioner created
I0919 16:54:24.903790 85253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.559180744s)
I0919 16:54:24.903840 85253 command_runner.go:130] > storageclass.storage.k8s.io/standard created
I0919 16:54:24.903863 85253 main.go:141] libmachine: Making call to close driver server
I0919 16:54:24.903867 85253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.466846375s)
I0919 16:54:24.903881 85253 main.go:141] libmachine: (multinode-415589) Calling .Close
I0919 16:54:24.903901 85253 main.go:141] libmachine: Making call to close driver server
I0919 16:54:24.903924 85253 main.go:141] libmachine: (multinode-415589) Calling .Close
I0919 16:54:24.903907 85253 command_runner.go:130] > configmap/coredns replaced
I0919 16:54:24.904044 85253 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.332247153s)
I0919 16:54:24.904071 85253 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
I0919 16:54:24.904209 85253 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:54:24.904232 85253 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:54:24.904243 85253 main.go:141] libmachine: Making call to close driver server
I0919 16:54:24.904243 85253 main.go:141] libmachine: (multinode-415589) DBG | Closing plugin on server side
I0919 16:54:24.904252 85253 main.go:141] libmachine: (multinode-415589) Calling .Close
I0919 16:54:24.904217 85253 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:54:24.904288 85253 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:54:24.904308 85253 main.go:141] libmachine: Making call to close driver server
I0919 16:54:24.904324 85253 main.go:141] libmachine: (multinode-415589) Calling .Close
I0919 16:54:24.904486 85253 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:54:24.904527 85253 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:54:24.904623 85253 main.go:141] libmachine: (multinode-415589) DBG | Closing plugin on server side
I0919 16:54:24.904701 85253 main.go:141] libmachine: Making call to close driver server
I0919 16:54:24.904719 85253 main.go:141] libmachine: (multinode-415589) Calling .Close
I0919 16:54:24.904950 85253 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:54:24.904965 85253 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:54:24.906096 85253 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:54:24.906125 85253 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:54:24.908086 85253 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0919 16:54:24.909584 85253 addons.go:502] enable addons completed in 1.857135481s: enabled=[default-storageclass storage-provisioner]
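
Both addons report success: storage-provisioner.yaml created its RBAC objects and pod, and storageclass.yaml created the "standard" StorageClass. A quick external check, assuming kubectl uses the profile's kubeconfig (illustrative, not part of the run):

  kubectl get storageclass
  kubectl -n kube-system get pod storage-provisioner
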
I0919 16:54:25.079433 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:25.079457 85253 round_trippers.go:469] Request Headers:
I0919 16:54:25.079465 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:25.079471 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:25.082174 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:25.082198 85253 round_trippers.go:577] Response Headers:
I0919 16:54:25.082206 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:25.082211 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:25.082217 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:25.082222 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:25 GMT
I0919 16:54:25.082227 85253 round_trippers.go:580] Audit-Id: eaec7cd0-19fb-4851-abab-2853e5170772
I0919 16:54:25.082232 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:25.082617 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:25.579306 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:25.579336 85253 round_trippers.go:469] Request Headers:
I0919 16:54:25.579352 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:25.579361 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:25.581778 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:25.581798 85253 round_trippers.go:577] Response Headers:
I0919 16:54:25.581805 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:25.581810 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:25 GMT
I0919 16:54:25.581816 85253 round_trippers.go:580] Audit-Id: 54a24287-7988-4d7e-b14b-59143dbfae20
I0919 16:54:25.581821 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:25.581826 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:25.581831 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:25.582219 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:25.582555 85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
I0919 16:54:26.078882 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:26.078912 85253 round_trippers.go:469] Request Headers:
I0919 16:54:26.078921 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:26.078933 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:26.082036 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:54:26.082060 85253 round_trippers.go:577] Response Headers:
I0919 16:54:26.082071 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:26 GMT
I0919 16:54:26.082080 85253 round_trippers.go:580] Audit-Id: 31ae632d-0073-4887-a5d8-c9dc78573675
I0919 16:54:26.082088 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:26.082094 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:26.082099 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:26.082104 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:26.082233 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:26.578863 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:26.578885 85253 round_trippers.go:469] Request Headers:
I0919 16:54:26.578893 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:26.578899 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:26.581484 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:26.581507 85253 round_trippers.go:577] Response Headers:
I0919 16:54:26.581518 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:26.581527 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:26.581536 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:26 GMT
I0919 16:54:26.581542 85253 round_trippers.go:580] Audit-Id: b16e8fbf-c2f2-4a76-a249-094835df55bf
I0919 16:54:26.581547 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:26.581553 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:26.582036 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:27.079526 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:27.079550 85253 round_trippers.go:469] Request Headers:
I0919 16:54:27.079558 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:27.079564 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:27.082184 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:27.082205 85253 round_trippers.go:577] Response Headers:
I0919 16:54:27.082212 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:27.082217 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:27.082222 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:27.082228 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:27.082233 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:27 GMT
I0919 16:54:27.082238 85253 round_trippers.go:580] Audit-Id: 408ad81e-c299-4cc7-821e-f8635627a2e7
I0919 16:54:27.082414 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:27.579079 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:27.579103 85253 round_trippers.go:469] Request Headers:
I0919 16:54:27.579118 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:27.579131 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:27.582001 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:27.582025 85253 round_trippers.go:577] Response Headers:
I0919 16:54:27.582033 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:27 GMT
I0919 16:54:27.582038 85253 round_trippers.go:580] Audit-Id: 7d0c435f-6907-48ae-b542-602f753109db
I0919 16:54:27.582044 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:27.582049 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:27.582057 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:27.582062 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:27.582219 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:28.079021 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:28.079041 85253 round_trippers.go:469] Request Headers:
I0919 16:54:28.079056 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:28.079062 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:28.081690 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:28.081710 85253 round_trippers.go:577] Response Headers:
I0919 16:54:28.081720 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:28 GMT
I0919 16:54:28.081728 85253 round_trippers.go:580] Audit-Id: 1188f209-5793-4129-903c-7b1a39b3808a
I0919 16:54:28.081737 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:28.081745 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:28.081758 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:28.081771 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:28.082330 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:28.082807 85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
I0919 16:54:28.579510 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:28.579535 85253 round_trippers.go:469] Request Headers:
I0919 16:54:28.579547 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:28.579556 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:28.582180 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:28.582205 85253 round_trippers.go:577] Response Headers:
I0919 16:54:28.582215 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:28.582222 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:28.582227 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:28.582234 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:28.582242 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:28 GMT
I0919 16:54:28.582249 85253 round_trippers.go:580] Audit-Id: 377c82ef-a6ca-43c3-99ab-7cf8c1e74e23
I0919 16:54:28.582813 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:29.079114 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:29.079140 85253 round_trippers.go:469] Request Headers:
I0919 16:54:29.079149 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:29.079156 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:29.082314 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:54:29.082343 85253 round_trippers.go:577] Response Headers:
I0919 16:54:29.082355 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:29.082365 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:29.082373 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:29 GMT
I0919 16:54:29.082379 85253 round_trippers.go:580] Audit-Id: 52b454b1-f632-4de3-afb5-6d60a8ce9a48
I0919 16:54:29.082384 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:29.082390 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:29.082684 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:29.579312 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:29.579334 85253 round_trippers.go:469] Request Headers:
I0919 16:54:29.579342 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:29.579347 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:29.581990 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:29.582009 85253 round_trippers.go:577] Response Headers:
I0919 16:54:29.582016 85253 round_trippers.go:580] Audit-Id: d6c5ab61-5407-458d-b823-0cfa3d6c387b
I0919 16:54:29.582022 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:29.582028 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:29.582036 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:29.582045 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:29.582054 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:29 GMT
I0919 16:54:29.582464 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:30.079091 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:30.079116 85253 round_trippers.go:469] Request Headers:
I0919 16:54:30.079125 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:30.079132 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:30.082136 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:30.082158 85253 round_trippers.go:577] Response Headers:
I0919 16:54:30.082166 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:30 GMT
I0919 16:54:30.082177 85253 round_trippers.go:580] Audit-Id: 8be1cd1b-8f98-442d-9c93-5eb12bb43a1a
I0919 16:54:30.082182 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:30.082188 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:30.082193 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:30.082198 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:30.082632 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:30.082925 85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
I0919 16:54:30.579333 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:30.579360 85253 round_trippers.go:469] Request Headers:
I0919 16:54:30.579373 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:30.579383 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:30.582381 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:30.582407 85253 round_trippers.go:577] Response Headers:
I0919 16:54:30.582417 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:30.582425 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:30 GMT
I0919 16:54:30.582433 85253 round_trippers.go:580] Audit-Id: 27ab3e63-7b66-4e76-a41c-f899f34f2400
I0919 16:54:30.582441 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:30.582449 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:30.582460 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:30.583007 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:31.079729 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:31.079752 85253 round_trippers.go:469] Request Headers:
I0919 16:54:31.079761 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:31.079767 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:31.082734 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:31.082761 85253 round_trippers.go:577] Response Headers:
I0919 16:54:31.082771 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:31 GMT
I0919 16:54:31.082778 85253 round_trippers.go:580] Audit-Id: 55f1987f-797c-4780-8f49-8c8a3b5d5a84
I0919 16:54:31.082787 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:31.082799 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:31.082806 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:31.082811 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:31.082999 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:31.579772 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:31.579804 85253 round_trippers.go:469] Request Headers:
I0919 16:54:31.579819 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:31.579828 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:31.583902 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:54:31.583923 85253 round_trippers.go:577] Response Headers:
I0919 16:54:31.583934 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:31.583941 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:31.583948 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:31.583955 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:31 GMT
I0919 16:54:31.583963 85253 round_trippers.go:580] Audit-Id: bce7e871-b64f-4aeb-8239-3182ddff19eb
I0919 16:54:31.583973 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:31.584282 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:32.078932 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:32.078959 85253 round_trippers.go:469] Request Headers:
I0919 16:54:32.078967 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:32.078974 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:32.081964 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:32.081989 85253 round_trippers.go:577] Response Headers:
I0919 16:54:32.081999 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:32 GMT
I0919 16:54:32.082006 85253 round_trippers.go:580] Audit-Id: 282aa01b-6444-4cea-90a4-5311b329b4c8
I0919 16:54:32.082013 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:32.082021 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:32.082028 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:32.082036 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:32.082352 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:32.579065 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:32.579091 85253 round_trippers.go:469] Request Headers:
I0919 16:54:32.579099 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:32.579105 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:32.581841 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:32.581863 85253 round_trippers.go:577] Response Headers:
I0919 16:54:32.581874 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:32.581881 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:32.581887 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:32.581895 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:32 GMT
I0919 16:54:32.581902 85253 round_trippers.go:580] Audit-Id: 6eb45b8a-87f1-4d09-8136-06e3fa57ffee
I0919 16:54:32.581911 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:32.582129 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:32.582461 85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
I0919 16:54:33.079239 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:33.079265 85253 round_trippers.go:469] Request Headers:
I0919 16:54:33.079273 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:33.079280 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:33.081960 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:33.081990 85253 round_trippers.go:577] Response Headers:
I0919 16:54:33.082000 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:33.082009 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:33 GMT
I0919 16:54:33.082016 85253 round_trippers.go:580] Audit-Id: 42048eca-92af-4241-b48a-5d619f95fda3
I0919 16:54:33.082022 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:33.082033 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:33.082041 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:33.082633 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:33.579652 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:33.579677 85253 round_trippers.go:469] Request Headers:
I0919 16:54:33.579685 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:33.579692 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:33.582613 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:33.582645 85253 round_trippers.go:577] Response Headers:
I0919 16:54:33.582656 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:33.582664 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:33.582672 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:33.582679 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:33 GMT
I0919 16:54:33.582684 85253 round_trippers.go:580] Audit-Id: f7161931-e757-40ba-9c09-e36dae1ea406
I0919 16:54:33.582689 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:33.582810 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:34.079390 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:34.079417 85253 round_trippers.go:469] Request Headers:
I0919 16:54:34.079425 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:34.079431 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:34.083942 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:54:34.083966 85253 round_trippers.go:577] Response Headers:
I0919 16:54:34.083975 85253 round_trippers.go:580] Audit-Id: e4907963-6b2c-4f41-8dc2-76b1ec9a0d7c
I0919 16:54:34.083982 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:34.083990 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:34.083998 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:34.084006 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:34.084014 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:34 GMT
I0919 16:54:34.085255 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:34.578928 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:34.578953 85253 round_trippers.go:469] Request Headers:
I0919 16:54:34.578962 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:34.578968 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:34.581424 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:34.581448 85253 round_trippers.go:577] Response Headers:
I0919 16:54:34.581458 85253 round_trippers.go:580] Audit-Id: 588c2d7a-e397-42dc-93d4-0b3e6813a309
I0919 16:54:34.581467 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:34.581476 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:34.581487 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:34.581497 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:34.581505 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:34 GMT
I0919 16:54:34.582079 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:35.079820 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:35.079848 85253 round_trippers.go:469] Request Headers:
I0919 16:54:35.079863 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:35.079872 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:35.082774 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:35.082799 85253 round_trippers.go:577] Response Headers:
I0919 16:54:35.082815 85253 round_trippers.go:580] Audit-Id: 8e8bc260-e87b-4987-ae8d-88002347daad
I0919 16:54:35.082823 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:35.082832 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:35.082840 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:35.082851 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:35.082860 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:35 GMT
I0919 16:54:35.083149 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:35.083486 85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
I0919 16:54:35.578843 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:35.578868 85253 round_trippers.go:469] Request Headers:
I0919 16:54:35.578877 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:35.578882 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:35.581603 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:35.581632 85253 round_trippers.go:577] Response Headers:
I0919 16:54:35.581640 85253 round_trippers.go:580] Audit-Id: 5edbbc8d-502f-4a6c-b7ab-5513f2f2d8d0
I0919 16:54:35.581645 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:35.581650 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:35.581655 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:35.581660 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:35.581674 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:35 GMT
I0919 16:54:35.582048 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
I0919 16:54:36.079755 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:36.079776 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.079784 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.079790 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.083100 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:54:36.083127 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.083139 85253 round_trippers.go:580] Audit-Id: 6a68aebf-7a22-4516-ae15-d1414b4a173c
I0919 16:54:36.083149 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.083157 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.083168 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.083178 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.083187 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.083358 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
I0919 16:54:36.083705 85253 node_ready.go:49] node "multinode-415589" has status "Ready":"True"
I0919 16:54:36.083721 85253 node_ready.go:38] duration metric: took 12.510902115s waiting for node "multinode-415589" to be "Ready" ...
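For context on the loop recorded above: the node_ready phase simply re-fetches the Node object roughly every 500ms (note the ~500ms spacing of the GET timestamps) and checks whether its Ready condition has turned True, which here took about 12.5s. The following is a minimal client-go sketch of that check, an illustration rather than minikube's actual code; the kubeconfig path is an assumption, while the node name is taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for illustration; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Re-fetch the Node roughly every 500ms until it reports Ready,
	// mirroring the polling cadence visible in the log above.
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "multinode-415589", metav1.GetOptions{})
		if err == nil && isNodeReady(node) {
			fmt.Println("node multinode-415589 is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}

A real caller would also bound this loop with a timeout, as minikube's wait logic does.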
I0919 16:54:36.083730 85253 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0919 16:54:36.083829 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
I0919 16:54:36.083842 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.083853 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.083863 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.087213 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:54:36.087227 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.087234 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.087239 85253 round_trippers.go:580] Audit-Id: d1bfd11f-4c54-44b8-b002-9e829ff3ef66
I0919 16:54:36.087245 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.087250 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.087254 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.087260 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.089610 85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"393","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52583 chars]
I0919 16:54:36.092461 85253 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
I0919 16:54:36.092529 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:54:36.092537 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.092544 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.092550 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.095488 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:36.095504 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.095511 85253 round_trippers.go:580] Audit-Id: 718b961f-0e26-4bc8-9ac3-80cb3a3de233
I0919 16:54:36.095516 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.095521 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.095526 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.095531 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.095538 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.095683 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"393","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4762 chars]
I0919 16:54:36.096034 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:36.096044 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.096051 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.096056 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.097883 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:54:36.097898 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.097904 85253 round_trippers.go:580] Audit-Id: 64e54a98-8b5e-46dd-87ee-85735c0daf8d
I0919 16:54:36.097909 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.097914 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.097919 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.097923 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.097928 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.098091 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
I0919 16:54:36.098533 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:54:36.098549 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.098559 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.098568 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.107823 85253 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0919 16:54:36.107840 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.107847 85253 round_trippers.go:580] Audit-Id: 4b5248cc-edd6-4208-a977-e880d8144266
I0919 16:54:36.107852 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.107858 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.107862 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.107870 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.107878 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.108011 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
I0919 16:54:36.108477 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:36.108491 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.108498 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.108505 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.114196 85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0919 16:54:36.114212 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.114220 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.114225 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.114230 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.114235 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.114241 85253 round_trippers.go:580] Audit-Id: 094831fc-42f2-4433-8e48-5d75cd8242c2
I0919 16:54:36.114248 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.114588 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
I0919 16:54:36.615484 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:54:36.615530 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.615544 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.615552 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.618536 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:36.618557 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.618564 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.618570 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.618575 85253 round_trippers.go:580] Audit-Id: 365dd419-91ce-4cf7-96c6-7990154ccda9
I0919 16:54:36.618580 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.618586 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.618591 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.618800 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
I0919 16:54:36.619414 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:36.619437 85253 round_trippers.go:469] Request Headers:
I0919 16:54:36.619448 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:36.619457 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:36.621668 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:36.621681 85253 round_trippers.go:577] Response Headers:
I0919 16:54:36.621687 85253 round_trippers.go:580] Audit-Id: f21b4c11-44e0-4b00-a2f3-291f1de2a4d3
I0919 16:54:36.621692 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:36.621697 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:36.621702 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:36.621707 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:36.621713 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:36 GMT
I0919 16:54:36.621933 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
I0919 16:54:37.115654 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:54:37.115678 85253 round_trippers.go:469] Request Headers:
I0919 16:54:37.115686 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:37.115695 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:37.120257 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:54:37.120286 85253 round_trippers.go:577] Response Headers:
I0919 16:54:37.120298 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:37.120306 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:37.120315 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:37.120322 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:37.120331 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:37 GMT
I0919 16:54:37.120340 85253 round_trippers.go:580] Audit-Id: 48e29f15-3042-4a43-9b40-68ffd3961bf0
I0919 16:54:37.120555 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
I0919 16:54:37.121050 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:37.121065 85253 round_trippers.go:469] Request Headers:
I0919 16:54:37.121073 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:37.121079 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:37.123200 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:37.123221 85253 round_trippers.go:577] Response Headers:
I0919 16:54:37.123230 85253 round_trippers.go:580] Audit-Id: 17fac475-35f5-4c6a-82a9-7397eadbdc1e
I0919 16:54:37.123239 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:37.123247 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:37.123255 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:37.123267 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:37.123278 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:37 GMT
I0919 16:54:37.123510 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
I0919 16:54:37.615184 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:54:37.615207 85253 round_trippers.go:469] Request Headers:
I0919 16:54:37.615215 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:37.615221 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:37.617757 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:37.617780 85253 round_trippers.go:577] Response Headers:
I0919 16:54:37.617790 85253 round_trippers.go:580] Audit-Id: 31fd8e3e-7862-4124-960b-fadc32cfb060
I0919 16:54:37.617798 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:37.617806 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:37.617814 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:37.617823 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:37.617838 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:37 GMT
I0919 16:54:37.618051 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
I0919 16:54:37.618740 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:37.618761 85253 round_trippers.go:469] Request Headers:
I0919 16:54:37.618768 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:37.618774 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:37.620657 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:54:37.620674 85253 round_trippers.go:577] Response Headers:
I0919 16:54:37.620683 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:37.620691 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:37.620697 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:37 GMT
I0919 16:54:37.620705 85253 round_trippers.go:580] Audit-Id: 3256a8d7-1a8e-4d93-abfc-ec29d2085557
I0919 16:54:37.620712 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:37.620724 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:37.621012 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:38.115730 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:54:38.115757 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.115765 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.115771 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.118667 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.118680 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.118700 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.118706 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.118711 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.118716 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.118722 85253 round_trippers.go:580] Audit-Id: 0f533a70-b2fa-4c43-90ce-c9c29edae6e9
I0919 16:54:38.118728 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.119194 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
I0919 16:54:38.119735 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:38.119751 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.119759 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.119764 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.121980 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.121992 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.121998 85253 round_trippers.go:580] Audit-Id: fae37c84-b36f-4885-9fce-cc4cfc962e43
I0919 16:54:38.122003 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.122008 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.122013 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.122018 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.122023 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.122401 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:38.122698 85253 pod_ready.go:102] pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace has status "Ready":"False"
I0919 16:54:38.615039 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:54:38.615064 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.615072 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.615078 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.617651 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.617670 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.617677 85253 round_trippers.go:580] Audit-Id: 2ed5ac42-6de7-42cf-ac64-7eb14d20bdb4
I0919 16:54:38.617682 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.617687 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.617692 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.617697 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.617702 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.618121 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
I0919 16:54:38.618618 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:38.618633 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.618641 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.618646 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.620800 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.620812 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.620817 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.620822 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.620827 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.620833 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.620838 85253 round_trippers.go:580] Audit-Id: 2cf47788-16bc-4f31-926c-f530f19ac895
I0919 16:54:38.620842 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.621097 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:38.621359 85253 pod_ready.go:92] pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace has status "Ready":"True"
I0919 16:54:38.621374 85253 pod_ready.go:81] duration metric: took 2.528893039s waiting for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.621382 85253 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.621428 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-415589
I0919 16:54:38.621436 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.621442 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.621448 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.623182 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:54:38.623192 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.623200 85253 round_trippers.go:580] Audit-Id: 5e3b578c-b0b1-46a9-9431-99415be92bb1
I0919 16:54:38.623205 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.623210 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.623215 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.623220 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.623225 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.623612 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-415589","namespace":"kube-system","uid":"1dbf3be3-1373-453b-a745-575b7f604586","resourceVersion":"383","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.11:2379","kubernetes.io/config.hash":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.mirror":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.seen":"2023-09-19T16:54:11.230739231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
I0919 16:54:38.624077 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:38.624091 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.624098 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.624104 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.626064 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:54:38.626081 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.626090 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.626096 85253 round_trippers.go:580] Audit-Id: cc024073-c26c-41e6-8936-337ab34d4a34
I0919 16:54:38.626101 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.626107 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.626116 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.626125 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.626237 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:38.626554 85253 pod_ready.go:92] pod "etcd-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:54:38.626568 85253 pod_ready.go:81] duration metric: took 5.181196ms waiting for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.626579 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.626637 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-415589
I0919 16:54:38.626648 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.626659 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.626667 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.628756 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.628770 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.628777 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.628782 85253 round_trippers.go:580] Audit-Id: 2d9cca00-0ce0-4f34-ae0e-bf946911fabe
I0919 16:54:38.628787 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.628792 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.628797 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.628802 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.629012 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-415589","namespace":"kube-system","uid":"4ecf615e-9f92-46f8-8b34-9de418bca0ac","resourceVersion":"384","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.11:8443","kubernetes.io/config.hash":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.mirror":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.seen":"2023-09-19T16:54:11.230732561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
I0919 16:54:38.629382 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:38.629395 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.629401 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.629407 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.631564 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.631584 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.631594 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.631603 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.631611 85253 round_trippers.go:580] Audit-Id: a5e77ed4-59e2-4092-aaca-ecff6790196e
I0919 16:54:38.631621 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.631634 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.631645 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.631775 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:38.632027 85253 pod_ready.go:92] pod "kube-apiserver-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:54:38.632041 85253 pod_ready.go:81] duration metric: took 5.455635ms waiting for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.632053 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.632098 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-415589
I0919 16:54:38.632107 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.632117 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.632128 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.633909 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:54:38.633923 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.633931 85253 round_trippers.go:580] Audit-Id: 24dbf25a-9b34-4832-a490-7b0ad821ce97
I0919 16:54:38.633939 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.633947 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.633956 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.633973 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.633980 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.634158 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-415589","namespace":"kube-system","uid":"3b76511f-a4ea-484d-a0f7-6968c3abf350","resourceVersion":"385","creationTimestamp":"2023-09-19T16:54:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.mirror":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.seen":"2023-09-19T16:54:02.792831460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
I0919 16:54:38.634515 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:38.634527 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.634534 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.634542 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.636075 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:54:38.636092 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.636101 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.636110 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.636124 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.636138 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.636144 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.636149 85253 round_trippers.go:580] Audit-Id: fd33917b-a4c4-4618-bdda-0f7d101290b3
I0919 16:54:38.636468 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:38.636738 85253 pod_ready.go:92] pod "kube-controller-manager-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:54:38.636752 85253 pod_ready.go:81] duration metric: took 4.691897ms waiting for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.636760 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.680048 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6jtp
I0919 16:54:38.680065 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.680073 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.680079 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.682274 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.682292 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.682301 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.682309 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.682316 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.682324 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.682333 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.682340 85253 round_trippers.go:580] Audit-Id: 1a137c0c-2ac7-46fd-91ae-1dd2d9d99601
I0919 16:54:38.683128 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6jtp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1f6a8f6-f608-4f79-9fd4-1a570bde14a6","resourceVersion":"376","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5f6891df-57ac-4a88-9703-82c35d43e2eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6891df-57ac-4a88-9703-82c35d43e2eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
I0919 16:54:38.880061 85253 request.go:629] Waited for 196.562901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:38.880122 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:38.880127 85253 round_trippers.go:469] Request Headers:
I0919 16:54:38.880147 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:38.880153 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:38.882999 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:38.883018 85253 round_trippers.go:577] Response Headers:
I0919 16:54:38.883025 85253 round_trippers.go:580] Audit-Id: 907ec3af-7e5a-4499-a794-66f124d88879
I0919 16:54:38.883031 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:38.883036 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:38.883041 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:38.883047 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:38.883052 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:38 GMT
I0919 16:54:38.883274 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:38.883613 85253 pod_ready.go:92] pod "kube-proxy-r6jtp" in "kube-system" namespace has status "Ready":"True"
I0919 16:54:38.883629 85253 pod_ready.go:81] duration metric: took 246.863276ms waiting for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
I0919 16:54:38.883639 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:39.079985 85253 request.go:629] Waited for 196.278198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
I0919 16:54:39.080058 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
I0919 16:54:39.080064 85253 round_trippers.go:469] Request Headers:
I0919 16:54:39.080071 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:39.080078 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:39.082827 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:39.082844 85253 round_trippers.go:577] Response Headers:
I0919 16:54:39.082850 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:39.082858 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:39.082867 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:39.082875 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:39.082886 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:39 GMT
I0919 16:54:39.082894 85253 round_trippers.go:580] Audit-Id: 55f3aac3-fbc5-4c3a-a1e1-b724778bf564
I0919 16:54:39.083072 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-415589","namespace":"kube-system","uid":"6f43b8d1-3b77-4df6-8b66-7d08cf7c0682","resourceVersion":"362","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.mirror":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.seen":"2023-09-19T16:54:11.230737337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
I0919 16:54:39.280803 85253 request.go:629] Waited for 197.345341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:39.280873 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:54:39.280878 85253 round_trippers.go:469] Request Headers:
I0919 16:54:39.280886 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:39.280891 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:39.283272 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:39.283292 85253 round_trippers.go:577] Response Headers:
I0919 16:54:39.283299 85253 round_trippers.go:580] Audit-Id: e53f0dde-13b1-4337-8c79-6b26e0e1862c
I0919 16:54:39.283304 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:39.283309 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:39.283317 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:39.283322 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:39.283329 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:39 GMT
I0919 16:54:39.283952 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
I0919 16:54:39.284237 85253 pod_ready.go:92] pod "kube-scheduler-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:54:39.284252 85253 pod_ready.go:81] duration metric: took 400.608428ms waiting for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:54:39.284262 85253 pod_ready.go:38] duration metric: took 3.200522207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
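(The pod_ready.go loop above simply re-GETs each pod, and the node it runs on, roughly every 500ms until the pod's Ready condition turns True, with a 6m0s cap per pod. A minimal client-go sketch of that polling pattern follows; the kubeconfig path, namespace, and pod name are illustrative assumptions, not minikube's actual pod_ready.go code.)

    // waitready.go - hedged sketch of polling a pod's Ready condition with client-go.
    // Assumes a reachable cluster via the given kubeconfig; not minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // mirrors the 6m0s wait above
        defer cancel()
        for {
            // Re-fetch the pod each iteration, as the GET requests in the log do.
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-ctsv5", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for pod to be Ready")
            case <-time.After(500 * time.Millisecond): // the log above polls at roughly this interval
            }
        }
    }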
I0919 16:54:39.284281 85253 api_server.go:52] waiting for apiserver process to appear ...
I0919 16:54:39.284327 85253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0919 16:54:39.297991 85253 command_runner.go:130] > 1917
I0919 16:54:39.298423 85253 api_server.go:72] duration metric: took 16.213590475s to wait for apiserver process to appear ...
I0919 16:54:39.298437 85253 api_server.go:88] waiting for apiserver healthz status ...
I0919 16:54:39.298456 85253 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
I0919 16:54:39.303494 85253 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
ok
I0919 16:54:39.303549 85253 round_trippers.go:463] GET https://192.168.50.11:8443/version
I0919 16:54:39.303557 85253 round_trippers.go:469] Request Headers:
I0919 16:54:39.303565 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:39.303571 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:39.304741 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:54:39.304756 85253 round_trippers.go:577] Response Headers:
I0919 16:54:39.304762 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:39 GMT
I0919 16:54:39.304767 85253 round_trippers.go:580] Audit-Id: c44e0f93-b99e-4aa0-a644-65bd8ca628c5
I0919 16:54:39.304773 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:39.304781 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:39.304795 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:39.304804 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:39.304812 85253 round_trippers.go:580] Content-Length: 263
I0919 16:54:39.304828 85253 request.go:1212] Response Body: {
"major": "1",
"minor": "28",
"gitVersion": "v1.28.2",
"gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
"gitTreeState": "clean",
"buildDate": "2023-09-13T09:29:07Z",
"goVersion": "go1.20.8",
"compiler": "gc",
"platform": "linux/amd64"
}
I0919 16:54:39.304891 85253 api_server.go:141] control plane version: v1.28.2
I0919 16:54:39.304906 85253 api_server.go:131] duration metric: took 6.464103ms to wait for apiserver health ...
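(The healthz and /version probes above map onto two discovery-client calls: a raw GET of /healthz and ServerVersion(). A hedged sketch, assuming the same *kubernetes.Clientset, context, and imports as in the previous sketch:)

    // checkAPIServer mirrors the healthz and /version probes in the log above.
    // client is an illustrative *kubernetes.Clientset built as in the earlier sketch.
    func checkAPIServer(ctx context.Context, client *kubernetes.Clientset) error {
        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", body) // the log shows the endpoint returning "ok"

        info, err := client.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Printf("control plane version: %s\n", info.GitVersion) // e.g. v1.28.2
        return nil
    }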
I0919 16:54:39.304912 85253 system_pods.go:43] waiting for kube-system pods to appear ...
I0919 16:54:39.480329 85253 request.go:629] Waited for 175.327843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
I0919 16:54:39.480391 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
I0919 16:54:39.480396 85253 round_trippers.go:469] Request Headers:
I0919 16:54:39.480404 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:39.480410 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:39.483877 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:54:39.483891 85253 round_trippers.go:577] Response Headers:
I0919 16:54:39.483898 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:39.483904 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:39.483909 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:39 GMT
I0919 16:54:39.483914 85253 round_trippers.go:580] Audit-Id: cea226cf-6caf-4ada-ae95-4dbf03735241
I0919 16:54:39.483920 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:39.483925 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:39.485330 85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
I0919 16:54:39.487843 85253 system_pods.go:59] 8 kube-system pods found
I0919 16:54:39.487877 85253 system_pods.go:61] "coredns-5dd5756b68-ctsv5" [d4fcd880-e2ad-4d44-a070-e2af114e5e38] Running
I0919 16:54:39.487885 85253 system_pods.go:61] "etcd-multinode-415589" [1dbf3be3-1373-453b-a745-575b7f604586] Running
I0919 16:54:39.487892 85253 system_pods.go:61] "kindnet-w9q5z" [39f88f25-8a6e-475c-8ef1-77c9d289fd48] Running
I0919 16:54:39.487899 85253 system_pods.go:61] "kube-apiserver-multinode-415589" [4ecf615e-9f92-46f8-8b34-9de418bca0ac] Running
I0919 16:54:39.487910 85253 system_pods.go:61] "kube-controller-manager-multinode-415589" [3b76511f-a4ea-484d-a0f7-6968c3abf350] Running
I0919 16:54:39.487916 85253 system_pods.go:61] "kube-proxy-r6jtp" [a1f6a8f6-f608-4f79-9fd4-1a570bde14a6] Running
I0919 16:54:39.487922 85253 system_pods.go:61] "kube-scheduler-multinode-415589" [6f43b8d1-3b77-4df6-8b66-7d08cf7c0682] Running
I0919 16:54:39.487933 85253 system_pods.go:61] "storage-provisioner" [61db80e1-b248-49b3-aab0-4b70b4b47c51] Running
I0919 16:54:39.487941 85253 system_pods.go:74] duration metric: took 183.022751ms to wait for pod list to return data ...
I0919 16:54:39.487949 85253 default_sa.go:34] waiting for default service account to be created ...
I0919 16:54:39.679920 85253 request.go:629] Waited for 191.878504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/default/serviceaccounts
I0919 16:54:39.679987 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/default/serviceaccounts
I0919 16:54:39.679993 85253 round_trippers.go:469] Request Headers:
I0919 16:54:39.680000 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:39.680006 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:39.683028 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:54:39.683047 85253 round_trippers.go:577] Response Headers:
I0919 16:54:39.683054 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:39 GMT
I0919 16:54:39.683059 85253 round_trippers.go:580] Audit-Id: afea5b70-8aeb-46e3-aca9-6b193e268e6a
I0919 16:54:39.683065 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:39.683070 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:39.683075 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:39.683080 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:39.683089 85253 round_trippers.go:580] Content-Length: 261
I0919 16:54:39.683110 85253 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f02b0790-b184-491e-88ab-fc300c097bfd","resourceVersion":"303","creationTimestamp":"2023-09-19T16:54:22Z"}}]}
I0919 16:54:39.683392 85253 default_sa.go:45] found service account: "default"
I0919 16:54:39.683417 85253 default_sa.go:55] duration metric: took 195.462471ms for default service account to be created ...
I0919 16:54:39.683426 85253 system_pods.go:116] waiting for k8s-apps to be running ...
I0919 16:54:39.879812 85253 request.go:629] Waited for 196.308979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
I0919 16:54:39.879893 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
I0919 16:54:39.879898 85253 round_trippers.go:469] Request Headers:
I0919 16:54:39.879910 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:39.879917 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:39.884125 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:54:39.884151 85253 round_trippers.go:577] Response Headers:
I0919 16:54:39.884162 85253 round_trippers.go:580] Audit-Id: 7033d54c-182b-4127-b32e-4b2b37e1441c
I0919 16:54:39.884171 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:39.884179 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:39.884188 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:39.884196 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:39.884204 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:39 GMT
I0919 16:54:39.885416 85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
I0919 16:54:39.887153 85253 system_pods.go:86] 8 kube-system pods found
I0919 16:54:39.887174 85253 system_pods.go:89] "coredns-5dd5756b68-ctsv5" [d4fcd880-e2ad-4d44-a070-e2af114e5e38] Running
I0919 16:54:39.887179 85253 system_pods.go:89] "etcd-multinode-415589" [1dbf3be3-1373-453b-a745-575b7f604586] Running
I0919 16:54:39.887183 85253 system_pods.go:89] "kindnet-w9q5z" [39f88f25-8a6e-475c-8ef1-77c9d289fd48] Running
I0919 16:54:39.887187 85253 system_pods.go:89] "kube-apiserver-multinode-415589" [4ecf615e-9f92-46f8-8b34-9de418bca0ac] Running
I0919 16:54:39.887195 85253 system_pods.go:89] "kube-controller-manager-multinode-415589" [3b76511f-a4ea-484d-a0f7-6968c3abf350] Running
I0919 16:54:39.887199 85253 system_pods.go:89] "kube-proxy-r6jtp" [a1f6a8f6-f608-4f79-9fd4-1a570bde14a6] Running
I0919 16:54:39.887207 85253 system_pods.go:89] "kube-scheduler-multinode-415589" [6f43b8d1-3b77-4df6-8b66-7d08cf7c0682] Running
I0919 16:54:39.887211 85253 system_pods.go:89] "storage-provisioner" [61db80e1-b248-49b3-aab0-4b70b4b47c51] Running
I0919 16:54:39.887221 85253 system_pods.go:126] duration metric: took 203.789788ms to wait for k8s-apps to be running ...
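The PodList dumps above back two of the readiness gates (system_pods and apps_running): list kube-system and confirm every expected pod reports Running. A hedged client-go sketch of the same check, assuming a kubeconfig at the default ~/.kube/config location rather than the profile's own certificates:

    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes ~/.kube/config points at the cluster under test.
        home, _ := os.UserHomeDir()
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // List kube-system pods and report any that are not yet Running.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            running := p.Status.Phase == corev1.PodRunning
            fmt.Printf("%-45s %s (running=%v)\n", p.Name, p.Status.Phase, running)
        }
    }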
I0919 16:54:39.887230 85253 system_svc.go:44] waiting for kubelet service to be running ....
I0919 16:54:39.887282 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0919 16:54:39.903243 85253 system_svc.go:56] duration metric: took 16.000379ms WaitForService to wait for kubelet.
I0919 16:54:39.903270 85253 kubeadm.go:581] duration metric: took 16.818444013s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0919 16:54:39.903291 85253 node_conditions.go:102] verifying NodePressure condition ...
I0919 16:54:40.080803 85253 request.go:629] Waited for 177.36292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes
I0919 16:54:40.080866 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes
I0919 16:54:40.080871 85253 round_trippers.go:469] Request Headers:
I0919 16:54:40.080879 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:54:40.080885 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:54:40.083787 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:54:40.083808 85253 round_trippers.go:577] Response Headers:
I0919 16:54:40.083816 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:54:40 GMT
I0919 16:54:40.083821 85253 round_trippers.go:580] Audit-Id: 8632c30a-9fcb-4389-8d48-3a66b388a4d3
I0919 16:54:40.083826 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:54:40.083831 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:54:40.083836 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:54:40.083841 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:54:40.084014 85253 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
I0919 16:54:40.084498 85253 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0919 16:54:40.084523 85253 node_conditions.go:123] node cpu capacity is 2
I0919 16:54:40.084535 85253 node_conditions.go:105] duration metric: took 181.241026ms to run NodePressure ...
I0919 16:54:40.084547 85253 start.go:228] waiting for startup goroutines ...
I0919 16:54:40.084554 85253 start.go:233] waiting for cluster config update ...
I0919 16:54:40.084566 85253 start.go:242] writing updated cluster config ...
I0919 16:54:40.086745 85253 out.go:177]
I0919 16:54:40.088445 85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:54:40.088521 85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:54:40.090222 85253 out.go:177] * Starting worker node multinode-415589-m02 in cluster multinode-415589
I0919 16:54:40.091409 85253 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0919 16:54:40.091436 85253 cache.go:57] Caching tarball of preloaded images
I0919 16:54:40.091547 85253 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0919 16:54:40.091559 85253 cache.go:60] Finished verifying existence of preloaded tar for v1.28.2 on docker
I0919 16:54:40.091626 85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:54:40.091828 85253 start.go:365] acquiring machines lock for multinode-415589-m02: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0919 16:54:40.091874 85253 start.go:369] acquired machines lock for "multinode-415589-m02" in 26.599µs
I0919 16:54:40.091892 85253 start.go:93] Provisioning new machine with config: &{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReque
sted:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0919 16:54:40.091957 85253 start.go:125] createHost starting for "m02" (driver="kvm2")
I0919 16:54:40.093606 85253 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
I0919 16:54:40.093699 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:54:40.093737 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:54:40.108106 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
I0919 16:54:40.108537 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:54:40.109026 85253 main.go:141] libmachine: Using API Version 1
I0919 16:54:40.109054 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:54:40.109446 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:54:40.109641 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
I0919 16:54:40.109802 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:54:40.109950 85253 start.go:159] libmachine.API.Create for "multinode-415589" (driver="kvm2")
I0919 16:54:40.109983 85253 client.go:168] LocalClient.Create starting
I0919 16:54:40.110028 85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem
I0919 16:54:40.110060 85253 main.go:141] libmachine: Decoding PEM data...
I0919 16:54:40.110080 85253 main.go:141] libmachine: Parsing certificate...
I0919 16:54:40.110133 85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem
I0919 16:54:40.110152 85253 main.go:141] libmachine: Decoding PEM data...
I0919 16:54:40.110164 85253 main.go:141] libmachine: Parsing certificate...
I0919 16:54:40.110181 85253 main.go:141] libmachine: Running pre-create checks...
I0919 16:54:40.110190 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .PreCreateCheck
I0919 16:54:40.110340 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetConfigRaw
I0919 16:54:40.110707 85253 main.go:141] libmachine: Creating machine...
I0919 16:54:40.110721 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .Create
I0919 16:54:40.110867 85253 main.go:141] libmachine: (multinode-415589-m02) Creating KVM machine...
I0919 16:54:40.112165 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found existing default KVM network
I0919 16:54:40.112351 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found existing private KVM network mk-multinode-415589
I0919 16:54:40.112469 85253 main.go:141] libmachine: (multinode-415589-m02) Setting up store path in /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02 ...
I0919 16:54:40.112500 85253 main.go:141] libmachine: (multinode-415589-m02) Building disk image from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
I0919 16:54:40.112551 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.112446 85623 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-65689/.minikube
I0919 16:54:40.112683 85253 main.go:141] libmachine: (multinode-415589-m02) Downloading /home/jenkins/minikube-integration/17240-65689/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
I0919 16:54:40.329687 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.329515 85623 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa...
I0919 16:54:40.643644 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.643501 85623 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/multinode-415589-m02.rawdisk...
I0919 16:54:40.643674 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Writing magic tar header
I0919 16:54:40.643686 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Writing SSH key tar header
I0919 16:54:40.643695 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.643608 85623 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02 ...
I0919 16:54:40.643708 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02
I0919 16:54:40.643836 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines
I0919 16:54:40.643871 85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02 (perms=drwx------)
I0919 16:54:40.643884 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube
I0919 16:54:40.643900 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689
I0919 16:54:40.643913 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0919 16:54:40.643926 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins
I0919 16:54:40.643938 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home
I0919 16:54:40.643953 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Skipping /home - not owner
I0919 16:54:40.643969 85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines (perms=drwxr-xr-x)
I0919 16:54:40.643985 85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube (perms=drwxr-xr-x)
I0919 16:54:40.643995 85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689 (perms=drwxrwxr-x)
I0919 16:54:40.644008 85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0919 16:54:40.644021 85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0919 16:54:40.644038 85253 main.go:141] libmachine: (multinode-415589-m02) Creating domain...
I0919 16:54:40.644874 85253 main.go:141] libmachine: (multinode-415589-m02) define libvirt domain using xml:
I0919 16:54:40.644898 85253 main.go:141] libmachine: (multinode-415589-m02) <domain type='kvm'>
I0919 16:54:40.644912 85253 main.go:141] libmachine: (multinode-415589-m02) <name>multinode-415589-m02</name>
I0919 16:54:40.644926 85253 main.go:141] libmachine: (multinode-415589-m02) <memory unit='MiB'>2200</memory>
I0919 16:54:40.644937 85253 main.go:141] libmachine: (multinode-415589-m02) <vcpu>2</vcpu>
I0919 16:54:40.644949 85253 main.go:141] libmachine: (multinode-415589-m02) <features>
I0919 16:54:40.644968 85253 main.go:141] libmachine: (multinode-415589-m02) <acpi/>
I0919 16:54:40.644985 85253 main.go:141] libmachine: (multinode-415589-m02) <apic/>
I0919 16:54:40.644999 85253 main.go:141] libmachine: (multinode-415589-m02) <pae/>
I0919 16:54:40.645011 85253 main.go:141] libmachine: (multinode-415589-m02)
I0919 16:54:40.645025 85253 main.go:141] libmachine: (multinode-415589-m02) </features>
I0919 16:54:40.645035 85253 main.go:141] libmachine: (multinode-415589-m02) <cpu mode='host-passthrough'>
I0919 16:54:40.645048 85253 main.go:141] libmachine: (multinode-415589-m02)
I0919 16:54:40.645063 85253 main.go:141] libmachine: (multinode-415589-m02) </cpu>
I0919 16:54:40.645077 85253 main.go:141] libmachine: (multinode-415589-m02) <os>
I0919 16:54:40.645090 85253 main.go:141] libmachine: (multinode-415589-m02) <type>hvm</type>
I0919 16:54:40.645103 85253 main.go:141] libmachine: (multinode-415589-m02) <boot dev='cdrom'/>
I0919 16:54:40.645116 85253 main.go:141] libmachine: (multinode-415589-m02) <boot dev='hd'/>
I0919 16:54:40.645130 85253 main.go:141] libmachine: (multinode-415589-m02) <bootmenu enable='no'/>
I0919 16:54:40.645145 85253 main.go:141] libmachine: (multinode-415589-m02) </os>
I0919 16:54:40.645159 85253 main.go:141] libmachine: (multinode-415589-m02) <devices>
I0919 16:54:40.645172 85253 main.go:141] libmachine: (multinode-415589-m02) <disk type='file' device='cdrom'>
I0919 16:54:40.645193 85253 main.go:141] libmachine: (multinode-415589-m02) <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/boot2docker.iso'/>
I0919 16:54:40.645206 85253 main.go:141] libmachine: (multinode-415589-m02) <target dev='hdc' bus='scsi'/>
I0919 16:54:40.645220 85253 main.go:141] libmachine: (multinode-415589-m02) <readonly/>
I0919 16:54:40.645230 85253 main.go:141] libmachine: (multinode-415589-m02) </disk>
I0919 16:54:40.645240 85253 main.go:141] libmachine: (multinode-415589-m02) <disk type='file' device='disk'>
I0919 16:54:40.645258 85253 main.go:141] libmachine: (multinode-415589-m02) <driver name='qemu' type='raw' cache='default' io='threads' />
I0919 16:54:40.645275 85253 main.go:141] libmachine: (multinode-415589-m02) <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/multinode-415589-m02.rawdisk'/>
I0919 16:54:40.645284 85253 main.go:141] libmachine: (multinode-415589-m02) <target dev='hda' bus='virtio'/>
I0919 16:54:40.645292 85253 main.go:141] libmachine: (multinode-415589-m02) </disk>
I0919 16:54:40.645298 85253 main.go:141] libmachine: (multinode-415589-m02) <interface type='network'>
I0919 16:54:40.645303 85253 main.go:141] libmachine: (multinode-415589-m02) <source network='mk-multinode-415589'/>
I0919 16:54:40.645309 85253 main.go:141] libmachine: (multinode-415589-m02) <model type='virtio'/>
I0919 16:54:40.645314 85253 main.go:141] libmachine: (multinode-415589-m02) </interface>
I0919 16:54:40.645321 85253 main.go:141] libmachine: (multinode-415589-m02) <interface type='network'>
I0919 16:54:40.645326 85253 main.go:141] libmachine: (multinode-415589-m02) <source network='default'/>
I0919 16:54:40.645332 85253 main.go:141] libmachine: (multinode-415589-m02) <model type='virtio'/>
I0919 16:54:40.645339 85253 main.go:141] libmachine: (multinode-415589-m02) </interface>
I0919 16:54:40.645349 85253 main.go:141] libmachine: (multinode-415589-m02) <serial type='pty'>
I0919 16:54:40.645358 85253 main.go:141] libmachine: (multinode-415589-m02) <target port='0'/>
I0919 16:54:40.645372 85253 main.go:141] libmachine: (multinode-415589-m02) </serial>
I0919 16:54:40.645384 85253 main.go:141] libmachine: (multinode-415589-m02) <console type='pty'>
I0919 16:54:40.645398 85253 main.go:141] libmachine: (multinode-415589-m02) <target type='serial' port='0'/>
I0919 16:54:40.645414 85253 main.go:141] libmachine: (multinode-415589-m02) </console>
I0919 16:54:40.645431 85253 main.go:141] libmachine: (multinode-415589-m02) <rng model='virtio'>
I0919 16:54:40.645444 85253 main.go:141] libmachine: (multinode-415589-m02) <backend model='random'>/dev/random</backend>
I0919 16:54:40.645458 85253 main.go:141] libmachine: (multinode-415589-m02) </rng>
I0919 16:54:40.645470 85253 main.go:141] libmachine: (multinode-415589-m02)
I0919 16:54:40.645536 85253 main.go:141] libmachine: (multinode-415589-m02)
I0919 16:54:40.645562 85253 main.go:141] libmachine: (multinode-415589-m02) </devices>
I0919 16:54:40.645574 85253 main.go:141] libmachine: (multinode-415589-m02) </domain>
I0919 16:54:40.645583 85253 main.go:141] libmachine: (multinode-415589-m02)
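The XML printed above is handed to libvirt to define and boot the new worker VM. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings (the kvm2 driver is built on the same calls) and a domain.xml file containing a definition like the one logged here:

    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // domain.xml holds a definition like the one logged above.
        domainXML, err := os.ReadFile("domain.xml")
        if err != nil {
            panic(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Define the persistent domain, then start it.
        dom, err := conn.DomainDefineXML(string(domainXML))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Println("started domain:", name)
    }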
I0919 16:54:40.652507 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:03:87:04 in network default
I0919 16:54:40.652977 85253 main.go:141] libmachine: (multinode-415589-m02) Ensuring networks are active...
I0919 16:54:40.652999 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:40.653763 85253 main.go:141] libmachine: (multinode-415589-m02) Ensuring network default is active
I0919 16:54:40.654148 85253 main.go:141] libmachine: (multinode-415589-m02) Ensuring network mk-multinode-415589 is active
I0919 16:54:40.654518 85253 main.go:141] libmachine: (multinode-415589-m02) Getting domain xml...
I0919 16:54:40.655370 85253 main.go:141] libmachine: (multinode-415589-m02) Creating domain...
I0919 16:54:41.874765 85253 main.go:141] libmachine: (multinode-415589-m02) Waiting to get IP...
I0919 16:54:41.875680 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:41.876077 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:41.876100 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:41.876051 85623 retry.go:31] will retry after 197.512955ms: waiting for machine to come up
I0919 16:54:42.075574 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:42.075998 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:42.076029 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:42.075937 85623 retry.go:31] will retry after 386.1773ms: waiting for machine to come up
I0919 16:54:42.463825 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:42.464318 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:42.464354 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:42.464267 85623 retry.go:31] will retry after 394.663206ms: waiting for machine to come up
I0919 16:54:42.860862 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:42.861239 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:42.861275 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:42.861190 85623 retry.go:31] will retry after 474.519775ms: waiting for machine to come up
I0919 16:54:43.337444 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:43.337896 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:43.337930 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:43.337846 85623 retry.go:31] will retry after 572.54958ms: waiting for machine to come up
I0919 16:54:43.911505 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:43.911975 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:43.912001 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:43.911910 85623 retry.go:31] will retry after 839.255424ms: waiting for machine to come up
I0919 16:54:44.753032 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:44.753477 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:44.753506 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:44.753376 85623 retry.go:31] will retry after 1.021339087s: waiting for machine to come up
I0919 16:54:45.776541 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:45.776938 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:45.776973 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:45.776877 85623 retry.go:31] will retry after 1.408623312s: waiting for machine to come up
I0919 16:54:47.186977 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:47.187413 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:47.187447 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:47.187356 85623 retry.go:31] will retry after 1.375668679s: waiting for machine to come up
I0919 16:54:48.564941 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:48.565355 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:48.565387 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:48.565295 85623 retry.go:31] will retry after 2.222435737s: waiting for machine to come up
I0919 16:54:50.789090 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:50.789653 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:50.789692 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:50.789578 85623 retry.go:31] will retry after 2.067069722s: waiting for machine to come up
I0919 16:54:52.859900 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:52.860393 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:52.860424 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:52.860343 85623 retry.go:31] will retry after 3.562421103s: waiting for machine to come up
I0919 16:54:56.424446 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:56.424822 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:56.424854 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:56.424772 85623 retry.go:31] will retry after 3.449099167s: waiting for machine to come up
I0919 16:54:59.874985 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:54:59.875322 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
I0919 16:54:59.875354 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:59.875267 85623 retry.go:31] will retry after 5.18201167s: waiting for machine to come up
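The run of "will retry after ..." lines above is a jittered, growing backoff: each attempt asks libvirt for the domain's DHCP lease and sleeps a little longer until an address appears. A toy sketch of that shape; lookupIP here is a hypothetical placeholder for the lease query, not a minikube function:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for "ask libvirt for the domain's DHCP lease".
    func lookupIP(attempt int) (string, error) {
        if attempt < 14 {
            return "", errors.New("no lease yet")
        }
        return "192.168.50.170", nil
    }

    func main() {
        backoff := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Grow the wait and add jitter so parallel creations do not probe in lockstep.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 5*time.Second {
                backoff = time.Duration(float64(backoff) * 1.5)
            }
        }
    }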
I0919 16:55:05.058472 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.058890 85253 main.go:141] libmachine: (multinode-415589-m02) Found IP for machine: 192.168.50.170
I0919 16:55:05.058918 85253 main.go:141] libmachine: (multinode-415589-m02) Reserving static IP address...
I0919 16:55:05.058937 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has current primary IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.059340 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find host DHCP lease matching {name: "multinode-415589-m02", mac: "52:54:00:33:e7:29", ip: "192.168.50.170"} in network mk-multinode-415589
I0919 16:55:05.132559 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Getting to WaitForSSH function...
I0919 16:55:05.132596 85253 main.go:141] libmachine: (multinode-415589-m02) Reserved static IP address: 192.168.50.170
I0919 16:55:05.132613 85253 main.go:141] libmachine: (multinode-415589-m02) Waiting for SSH to be available...
I0919 16:55:05.135279 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.135819 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.135846 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.136243 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Using SSH client type: external
I0919 16:55:05.136281 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa (-rw-------)
I0919 16:55:05.136333 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0919 16:55:05.136397 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | About to run SSH command:
I0919 16:55:05.136421 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | exit 0
I0919 16:55:05.233464 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | SSH cmd err, output: <nil>:
I0919 16:55:05.233738 85253 main.go:141] libmachine: (multinode-415589-m02) KVM machine creation complete!
I0919 16:55:05.234078 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetConfigRaw
I0919 16:55:05.234608 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:05.234845 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:05.235038 85253 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0919 16:55:05.235058 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetState
I0919 16:55:05.236255 85253 main.go:141] libmachine: Detecting operating system of created instance...
I0919 16:55:05.236273 85253 main.go:141] libmachine: Waiting for SSH to be available...
I0919 16:55:05.236283 85253 main.go:141] libmachine: Getting to WaitForSSH function...
I0919 16:55:05.236293 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:05.238813 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.239103 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.239144 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.239370 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:05.239547 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.239714 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.239879 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:05.240031 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:05.240419 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:05.240431 85253 main.go:141] libmachine: About to run SSH command:
exit 0
I0919 16:55:05.368698 85253 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0919 16:55:05.368732 85253 main.go:141] libmachine: Detecting the provisioner...
I0919 16:55:05.368745 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:05.371455 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.371842 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.371866 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.372002 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:05.372203 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.372347 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.372512 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:05.372681 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:05.373151 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:05.373171 85253 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0919 16:55:05.506724 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-gb090841-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0919 16:55:05.506783 85253 main.go:141] libmachine: found compatible host: buildroot
I0919 16:55:05.506791 85253 main.go:141] libmachine: Provisioning with buildroot...
I0919 16:55:05.506801 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
I0919 16:55:05.507116 85253 buildroot.go:166] provisioning hostname "multinode-415589-m02"
I0919 16:55:05.507141 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
I0919 16:55:05.507400 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:05.510018 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.510363 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.510397 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.510517 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:05.510735 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.510941 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.511107 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:05.511316 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:05.511620 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:05.511633 85253 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-415589-m02 && echo "multinode-415589-m02" | sudo tee /etc/hostname
I0919 16:55:05.659902 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-415589-m02
I0919 16:55:05.659931 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:05.663115 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.663485 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.663550 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.663700 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:05.663910 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.664061 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.664155 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:05.664348 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:05.664955 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:05.664993 85253 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-415589-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589-m02/g' /etc/hosts;
else
echo '127.0.1.1 multinode-415589-m02' | sudo tee -a /etc/hosts;
fi
fi
I0919 16:55:05.806167 85253 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0919 16:55:05.806204 85253 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
I0919 16:55:05.806225 85253 buildroot.go:174] setting up certificates
I0919 16:55:05.806233 85253 provision.go:83] configureAuth start
I0919 16:55:05.806242 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
I0919 16:55:05.806556 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
I0919 16:55:05.808915 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.809245 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.809272 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.809424 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:05.811418 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.811864 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.811896 85253 provision.go:138] copyHostCerts
I0919 16:55:05.811905 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.811927 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
I0919 16:55:05.811968 85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
I0919 16:55:05.811982 85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
I0919 16:55:05.812052 85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
I0919 16:55:05.812145 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
I0919 16:55:05.812162 85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
I0919 16:55:05.812169 85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
I0919 16:55:05.812194 85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
I0919 16:55:05.812238 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
I0919 16:55:05.812260 85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
I0919 16:55:05.812267 85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
I0919 16:55:05.812289 85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
I0919 16:55:05.812333 85253 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589-m02 san=[192.168.50.170 192.168.50.170 localhost 127.0.0.1 minikube multinode-415589-m02]
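The line above summarizes how provisioning mints the machine's server certificate: a cert signed by the profile CA whose SANs cover the VM's IP, localhost, and both hostnames. A sketch of the same idea with Go's crypto/x509, using a throwaway CA instead of the ca.pem/ca-key.pem pair referenced in the log; file names and subjects here are illustrative only, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the sketch; real provisioning reuses the profile's CA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with a SAN list like the one in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-415589-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-415589-m02"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.50.170"), net.ParseIP("127.0.0.1")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Write server.pem; the matching server-key.pem would be PEM-encoded the same way.
        f, _ := os.Create("server.pem")
        defer f.Close()
        pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }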
I0919 16:55:05.959052 85253 provision.go:172] copyRemoteCerts
I0919 16:55:05.959128 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0919 16:55:05.959161 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:05.961903 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.962259 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:05.962297 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:05.962477 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:05.962680 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:05.962883 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:05.963072 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
I0919 16:55:06.058846 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0919 16:55:06.058913 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0919 16:55:06.082850 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0919 16:55:06.082914 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0919 16:55:06.106828 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem -> /etc/docker/server.pem
I0919 16:55:06.106896 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0919 16:55:06.131071 85253 provision.go:86] duration metric: configureAuth took 324.825149ms
I0919 16:55:06.131098 85253 buildroot.go:189] setting minikube options for container-runtime
I0919 16:55:06.131282 85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:55:06.131308 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:06.131618 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:06.133954 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:06.134405 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:06.134439 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:06.134616 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:06.134820 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:06.134976 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:06.135126 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:06.135352 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:06.135889 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:06.135912 85253 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0919 16:55:06.267202 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0919 16:55:06.267230 85253 buildroot.go:70] root file system type: tmpfs
I0919 16:55:06.267347 85253 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0919 16:55:06.267364 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:06.270085 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:06.270516 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:06.270549 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:06.270700 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:06.270896 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:06.271062 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:06.271216 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:06.271392 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:06.271698 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:06.271758 85253 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.50.11"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0919 16:55:06.414557 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.50.11
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0919 16:55:06.414600 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:06.417331 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:06.417735 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:06.417771 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:06.417971 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:06.418169 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:06.418364 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:06.418548 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:06.418708 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:06.419030 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:06.419058 85253 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0919 16:55:07.260954 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
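For reference, a minimal Go sketch of the conditional unit swap run over SSH above (updateUnitCmd is a name invented for this sketch, not minikube's provision code): the freshly rendered docker.service.new replaces the installed unit, followed by daemon-reload/enable/restart, only when its content actually differs.

package main

import "fmt"

// updateUnitCmd builds the same shell guard shown in the log: replace the
// unit and restart the service only when <unit>.new differs from <unit>.
func updateUnitCmd(unitPath, svc string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		unitPath, svc)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service", "docker"))
}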
I0919 16:55:07.260993 85253 main.go:141] libmachine: Checking connection to Docker...
I0919 16:55:07.261007 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetURL
I0919 16:55:07.262442 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Using libvirt version 6000000
I0919 16:55:07.264964 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.265364 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:07.265398 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.265602 85253 main.go:141] libmachine: Docker is up and running!
I0919 16:55:07.265631 85253 main.go:141] libmachine: Reticulating splines...
I0919 16:55:07.265640 85253 client.go:171] LocalClient.Create took 27.15564589s
I0919 16:55:07.265670 85253 start.go:167] duration metric: libmachine.API.Create for "multinode-415589" took 27.155721608s
I0919 16:55:07.265682 85253 start.go:300] post-start starting for "multinode-415589-m02" (driver="kvm2")
I0919 16:55:07.265698 85253 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0919 16:55:07.265718 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:07.265980 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0919 16:55:07.266012 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:07.268539 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.268971 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:07.269003 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.269164 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:07.269338 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:07.269516 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:07.269679 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
I0919 16:55:07.363688 85253 ssh_runner.go:195] Run: cat /etc/os-release
I0919 16:55:07.367667 85253 command_runner.go:130] > NAME=Buildroot
I0919 16:55:07.367687 85253 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
I0919 16:55:07.367693 85253 command_runner.go:130] > ID=buildroot
I0919 16:55:07.367702 85253 command_runner.go:130] > VERSION_ID=2021.02.12
I0919 16:55:07.367708 85253 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
I0919 16:55:07.367741 85253 info.go:137] Remote host: Buildroot 2021.02.12
I0919 16:55:07.367761 85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
I0919 16:55:07.367825 85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
I0919 16:55:07.367914 85253 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
I0919 16:55:07.367925 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /etc/ssl/certs/733972.pem
I0919 16:55:07.368022 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0919 16:55:07.376930 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
I0919 16:55:07.397976 85253 start.go:303] post-start completed in 132.278721ms
I0919 16:55:07.398033 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetConfigRaw
I0919 16:55:07.398721 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
I0919 16:55:07.401557 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.401919 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:07.401957 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.402230 85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:55:07.402471 85253 start.go:128] duration metric: createHost completed in 27.310501904s
I0919 16:55:07.402501 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:07.404785 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.405072 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:07.405104 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.405260 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:07.405468 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:07.405653 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:07.405820 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:07.405986 85253 main.go:141] libmachine: Using SSH client type: native
I0919 16:55:07.406434 85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.170 22 <nil> <nil>}
I0919 16:55:07.406450 85253 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0919 16:55:07.538368 85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142507.524227457
I0919 16:55:07.538410 85253 fix.go:206] guest clock: 1695142507.524227457
I0919 16:55:07.538421 85253 fix.go:219] Guest: 2023-09-19 16:55:07.524227457 +0000 UTC Remote: 2023-09-19 16:55:07.402485729 +0000 UTC m=+103.718930288 (delta=121.741728ms)
I0919 16:55:07.538443 85253 fix.go:190] guest clock delta is within tolerance: 121.741728ms
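As an aside, a rough sketch of the guest-clock check logged by fix.go above, assuming the guest time comes back from `date +%s.%N`; the 2s tolerance used here is an illustrative assumption, not the value minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the "seconds.nanoseconds" string printed by `date +%s.%N`.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to exactly 9 digits of nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1695142507.524227457")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}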
I0919 16:55:07.538451 85253 start.go:83] releasing machines lock for "multinode-415589-m02", held for 27.446568134s
I0919 16:55:07.538484 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:07.538804 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
I0919 16:55:07.541365 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.541741 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:07.541779 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.544321 85253 out.go:177] * Found network options:
I0919 16:55:07.545944 85253 out.go:177] - NO_PROXY=192.168.50.11
W0919 16:55:07.547076 85253 proxy.go:119] fail to check proxy env: Error ip not in block
I0919 16:55:07.547118 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:07.547619 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:07.547803 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
I0919 16:55:07.547934 85253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0919 16:55:07.547979 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
W0919 16:55:07.547996 85253 proxy.go:119] fail to check proxy env: Error ip not in block
I0919 16:55:07.548089 85253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0919 16:55:07.548113 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
I0919 16:55:07.550541 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.550853 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.550915 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:07.550954 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.551006 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:07.551207 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:07.551248 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:07.551307 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:07.551398 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:07.551420 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
I0919 16:55:07.551603 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
I0919 16:55:07.551621 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
I0919 16:55:07.551759 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
I0919 16:55:07.551907 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
I0919 16:55:07.644583 85253 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0919 16:55:07.644666 85253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0919 16:55:07.644741 85253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0919 16:55:07.672537 85253 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0919 16:55:07.673423 85253 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0919 16:55:07.673451 85253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0919 16:55:07.673465 85253 start.go:469] detecting cgroup driver to use...
I0919 16:55:07.673588 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:55:07.690687 85253 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0919 16:55:07.690788 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0919 16:55:07.699765 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0919 16:55:07.709776 85253 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0919 16:55:07.709833 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0919 16:55:07.719930 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:55:07.730515 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0919 16:55:07.741709 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:55:07.752639 85253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0919 16:55:07.762848 85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0919 16:55:07.773028 85253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0919 16:55:07.782140 85253 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
I0919 16:55:07.782227 85253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0919 16:55:07.791096 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:55:07.909109 85253 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0919 16:55:07.927205 85253 start.go:469] detecting cgroup driver to use...
I0919 16:55:07.927293 85253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0919 16:55:07.944704 85253 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0919 16:55:07.944767 85253 command_runner.go:130] > [Unit]
I0919 16:55:07.944777 85253 command_runner.go:130] > Description=Docker Application Container Engine
I0919 16:55:07.944782 85253 command_runner.go:130] > Documentation=https://docs.docker.com
I0919 16:55:07.944796 85253 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0919 16:55:07.944805 85253 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0919 16:55:07.944815 85253 command_runner.go:130] > StartLimitBurst=3
I0919 16:55:07.944823 85253 command_runner.go:130] > StartLimitIntervalSec=60
I0919 16:55:07.944830 85253 command_runner.go:130] > [Service]
I0919 16:55:07.944835 85253 command_runner.go:130] > Type=notify
I0919 16:55:07.944840 85253 command_runner.go:130] > Restart=on-failure
I0919 16:55:07.944845 85253 command_runner.go:130] > Environment=NO_PROXY=192.168.50.11
I0919 16:55:07.944852 85253 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0919 16:55:07.944863 85253 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0919 16:55:07.944870 85253 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0919 16:55:07.944877 85253 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0919 16:55:07.944887 85253 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0919 16:55:07.944897 85253 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0919 16:55:07.944904 85253 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0919 16:55:07.944915 85253 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0919 16:55:07.944922 85253 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0919 16:55:07.944926 85253 command_runner.go:130] > ExecStart=
I0919 16:55:07.944941 85253 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0919 16:55:07.944952 85253 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0919 16:55:07.944959 85253 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0919 16:55:07.944965 85253 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0919 16:55:07.944971 85253 command_runner.go:130] > LimitNOFILE=infinity
I0919 16:55:07.944975 85253 command_runner.go:130] > LimitNPROC=infinity
I0919 16:55:07.944981 85253 command_runner.go:130] > LimitCORE=infinity
I0919 16:55:07.944987 85253 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0919 16:55:07.944995 85253 command_runner.go:130] > # Only systemd 226 and above support this version.
I0919 16:55:07.944999 85253 command_runner.go:130] > TasksMax=infinity
I0919 16:55:07.945005 85253 command_runner.go:130] > TimeoutStartSec=0
I0919 16:55:07.945011 85253 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0919 16:55:07.945017 85253 command_runner.go:130] > Delegate=yes
I0919 16:55:07.945024 85253 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0919 16:55:07.945034 85253 command_runner.go:130] > KillMode=process
I0919 16:55:07.945038 85253 command_runner.go:130] > [Install]
I0919 16:55:07.945042 85253 command_runner.go:130] > WantedBy=multi-user.target
I0919 16:55:07.945402 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:55:07.961195 85253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0919 16:55:07.982944 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:55:07.995146 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:55:08.006161 85253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0919 16:55:08.038776 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:55:08.051734 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:55:08.068960 85253 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0919 16:55:08.069046 85253 ssh_runner.go:195] Run: which cri-dockerd
I0919 16:55:08.072702 85253 command_runner.go:130] > /usr/bin/cri-dockerd
I0919 16:55:08.072990 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0919 16:55:08.081489 85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0919 16:55:08.099293 85253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0919 16:55:08.212384 85253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0919 16:55:08.322604 85253 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0919 16:55:08.322652 85253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0919 16:55:08.341858 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:55:08.445976 85253 ssh_runner.go:195] Run: sudo systemctl restart docker
I0919 16:55:09.852095 85253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.406075661s)
I0919 16:55:09.852167 85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:55:09.953668 85253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0919 16:55:10.053750 85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:55:10.170136 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:55:10.293259 85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0919 16:55:10.309538 85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:55:10.428884 85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0919 16:55:10.512855 85253 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0919 16:55:10.512943 85253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0919 16:55:10.518494 85253 command_runner.go:130] > File: /var/run/cri-dockerd.sock
I0919 16:55:10.518516 85253 command_runner.go:130] > Size: 0 Blocks: 0 IO Block: 4096 socket
I0919 16:55:10.518523 85253 command_runner.go:130] > Device: 16h/22d Inode: 880 Links: 1
I0919 16:55:10.518530 85253 command_runner.go:130] > Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 1000/ docker)
I0919 16:55:10.518535 85253 command_runner.go:130] > Access: 2023-09-19 16:55:10.432062468 +0000
I0919 16:55:10.518540 85253 command_runner.go:130] > Modify: 2023-09-19 16:55:10.432062468 +0000
I0919 16:55:10.518544 85253 command_runner.go:130] > Change: 2023-09-19 16:55:10.435065926 +0000
I0919 16:55:10.518548 85253 command_runner.go:130] > Birth: -
I0919 16:55:10.518864 85253 start.go:537] Will wait 60s for crictl version
I0919 16:55:10.518923 85253 ssh_runner.go:195] Run: which crictl
I0919 16:55:10.523255 85253 command_runner.go:130] > /usr/bin/crictl
I0919 16:55:10.523321 85253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0919 16:55:10.580320 85253 command_runner.go:130] > Version: 0.1.0
I0919 16:55:10.580350 85253 command_runner.go:130] > RuntimeName: docker
I0919 16:55:10.580459 85253 command_runner.go:130] > RuntimeVersion: 24.0.6
I0919 16:55:10.580480 85253 command_runner.go:130] > RuntimeApiVersion: v1
I0919 16:55:10.582456 85253 start.go:553] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1
I0919 16:55:10.582536 85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0919 16:55:10.608625 85253 command_runner.go:130] > 24.0.6
I0919 16:55:10.608742 85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0919 16:55:10.632834 85253 command_runner.go:130] > 24.0.6
I0919 16:55:10.636360 85253 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
I0919 16:55:10.637795 85253 out.go:177] - env NO_PROXY=192.168.50.11
I0919 16:55:10.639243 85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
I0919 16:55:10.642029 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:10.642431 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
I0919 16:55:10.642462 85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
I0919 16:55:10.642670 85253 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0919 16:55:10.646718 85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
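A rough Go equivalent of the /etc/hosts rewrite above (illustrative only; minikube runs the bash one-liner over SSH, and ensureHostEntry is a name invented for this sketch): drop any stale host.minikube.internal line, append the new mapping, and swap the file in atomically, like the cp via /tmp/h.$$.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// ensureHostEntry removes any existing line whose hostname matches name and
// appends "ip\tname", writing through a temp file and renaming into place.
func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			continue // stale entry for this hostname
		}
		if strings.TrimSpace(line) != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Demo against a scratch copy so the sketch does not need root.
	demo := filepath.Join(os.TempDir(), "hosts-demo")
	_ = os.WriteFile(demo, []byte("127.0.0.1\tlocalhost\n"), 0644)
	fmt.Println(ensureHostEntry(demo, "192.168.50.1", "host.minikube.internal"))
}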
I0919 16:55:10.659147 85253 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589 for IP: 192.168.50.170
I0919 16:55:10.659173 85253 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0919 16:55:10.659326 85253 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
I0919 16:55:10.659364 85253 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
I0919 16:55:10.659377 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0919 16:55:10.659390 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0919 16:55:10.659406 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0919 16:55:10.659423 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0919 16:55:10.659493 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
W0919 16:55:10.659550 85253 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
I0919 16:55:10.659573 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
I0919 16:55:10.659613 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
I0919 16:55:10.659637 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
I0919 16:55:10.659661 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
I0919 16:55:10.659701 85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
I0919 16:55:10.659730 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /usr/share/ca-certificates/733972.pem
I0919 16:55:10.659743 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0919 16:55:10.659755 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem -> /usr/share/ca-certificates/73397.pem
I0919 16:55:10.660078 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0919 16:55:10.683241 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0919 16:55:10.705256 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0919 16:55:10.727098 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0919 16:55:10.749240 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
I0919 16:55:10.771451 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0919 16:55:10.793430 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
I0919 16:55:10.815000 85253 ssh_runner.go:195] Run: openssl version
I0919 16:55:10.820161 85253 command_runner.go:130] > OpenSSL 1.1.1n 15 Mar 2022
I0919 16:55:10.820534 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
I0919 16:55:10.830637 85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
I0919 16:55:10.835200 85253 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
I0919 16:55:10.835233 85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
I0919 16:55:10.835271 85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
I0919 16:55:10.840910 85253 command_runner.go:130] > 3ec20f2e
I0919 16:55:10.840965 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
I0919 16:55:10.850956 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0919 16:55:10.861189 85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0919 16:55:10.865423 85253 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
I0919 16:55:10.865448 85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
I0919 16:55:10.865536 85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0919 16:55:10.870642 85253 command_runner.go:130] > b5213941
I0919 16:55:10.870991 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0919 16:55:10.881324 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
I0919 16:55:10.891328 85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
I0919 16:55:10.895658 85253 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
I0919 16:55:10.895844 85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
I0919 16:55:10.895905 85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
I0919 16:55:10.901023 85253 command_runner.go:130] > 51391683
I0919 16:55:10.901152 85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
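The three hash links above all follow the same idempotent `test -L || ln -fs` pattern; a small Go sketch of that step (ensureCertLink is an invented name, and the hash filename is simply the value printed by openssl above):

package main

import (
	"errors"
	"fmt"
	"os"
)

// ensureCertLink mirrors `test -L <link> || ln -fs <target> <link>`: keep an
// existing symlink, replace anything else in the way, create the link otherwise.
func ensureCertLink(target, link string) error {
	if fi, err := os.Lstat(link); err == nil {
		if fi.Mode()&os.ModeSymlink != 0 {
			return nil // test -L succeeded, nothing to do
		}
		if err := os.Remove(link); err != nil { // ln -f semantics
			return err
		}
	} else if !errors.Is(err, os.ErrNotExist) {
		return err
	}
	return os.Symlink(target, link)
}

func main() {
	err := ensureCertLink("/etc/ssl/certs/733972.pem", "/etc/ssl/certs/3ec20f2e.0")
	fmt.Println("link result:", err)
}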
I0919 16:55:10.911034 85253 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0919 16:55:10.914906 85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0919 16:55:10.915020 85253 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0919 16:55:10.915110 85253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0919 16:55:10.947075 85253 command_runner.go:130] > cgroupfs
I0919 16:55:10.947160 85253 cni.go:84] Creating CNI manager for ""
I0919 16:55:10.947178 85253 cni.go:136] 2 nodes found, recommending kindnet
I0919 16:55:10.947199 85253 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0919 16:55:10.947228 85253 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.170 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-415589 NodeName:multinode-415589-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0919 16:55:10.947359 85253 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.170
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "multinode-415589-m02"
kubeletExtraArgs:
node-ip: 192.168.50.170
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.11"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
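The per-node values in the generated config above (advertise address, CRI socket, node name, node-ip) are the parts that vary for a joining worker; a trimmed-down text/template sketch of how such a fragment could be rendered (an illustration only, not minikube's actual kubeadm template):

package main

import (
	"os"
	"text/template"
)

// joinTmpl renders only the node-specific InitConfiguration fields that
// differ between m02 and the control plane in the config printed above.
const joinTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type nodeParams struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(joinTmpl))
	_ = t.Execute(os.Stdout, nodeParams{
		NodeIP:        "192.168.50.170",
		APIServerPort: 8443,
		CRISocket:     "/var/run/cri-dockerd.sock",
		NodeName:      "multinode-415589-m02",
	})
}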
I0919 16:55:10.947445 85253 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-415589-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.170
[Install]
config:
{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0919 16:55:10.947518 85253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
I0919 16:55:10.958393 85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
I0919 16:55:10.958441 85253 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.2: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
Initiating transfer...
I0919 16:55:10.958497 85253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.2
I0919 16:55:10.968039 85253 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256
I0919 16:55:10.968050 85253 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubeadm
I0919 16:55:10.968055 85253 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubelet
I0919 16:55:10.968066 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubectl -> /var/lib/minikube/binaries/v1.28.2/kubectl
I0919 16:55:10.968137 85253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl
I0919 16:55:10.972967 85253 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
I0919 16:55:10.973002 85253 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
I0919 16:55:10.973020 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubectl --> /var/lib/minikube/binaries/v1.28.2/kubectl (49864704 bytes)
I0919 16:55:18.859012 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubeadm -> /var/lib/minikube/binaries/v1.28.2/kubeadm
I0919 16:55:18.859096 85253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm
I0919 16:55:18.864141 85253 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
I0919 16:55:18.864192 85253 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
I0919 16:55:18.864217 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubeadm --> /var/lib/minikube/binaries/v1.28.2/kubeadm (50757632 bytes)
I0919 16:55:19.885737 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0919 16:55:19.901648 85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubelet -> /var/lib/minikube/binaries/v1.28.2/kubelet
I0919 16:55:19.901759 85253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet
I0919 16:55:19.905958 85253 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
I0919 16:55:19.905997 85253 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
I0919 16:55:19.906028 85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubelet --> /var/lib/minikube/binaries/v1.28.2/kubelet (110776320 bytes)
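Each binary above follows the same pattern: an existence check with stat, then a copy from the local cache only when the target is missing. A local-filesystem sketch of that step (ensureBinary is an invented helper; minikube does the copy over SSH with scp):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst is missing, mirroring the
// existence check followed by the scp shown in the log.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the copy
	} else if !os.IsNotExist(err) {
		return err
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Paths are placeholders matching the layout in the log.
	err := ensureBinary(
		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.28.2/kubelet"),
		"/var/lib/minikube/binaries/v1.28.2/kubelet",
	)
	fmt.Println("copy result:", err)
}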
I0919 16:55:20.423935 85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0919 16:55:20.433123 85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
I0919 16:55:20.448768 85253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0919 16:55:20.464104 85253 ssh_runner.go:195] Run: grep 192.168.50.11 control-plane.minikube.internal$ /etc/hosts
I0919 16:55:20.467681 85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.11 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0919 16:55:20.479501 85253 host.go:66] Checking if "multinode-415589" exists ...
I0919 16:55:20.479768 85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:55:20.479981 85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:55:20.480039 85253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:55:20.494283 85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
I0919 16:55:20.494711 85253 main.go:141] libmachine: () Calling .GetVersion
I0919 16:55:20.495164 85253 main.go:141] libmachine: Using API Version 1
I0919 16:55:20.495212 85253 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:55:20.495514 85253 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:55:20.495727 85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
I0919 16:55:20.495852 85253 start.go:304] JoinCluster: &{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.50.170 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0919 16:55:20.495979 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
I0919 16:55:20.495998 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
I0919 16:55:20.499279 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:55:20.499710 85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
I0919 16:55:20.499741 85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
I0919 16:55:20.499859 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
I0919 16:55:20.500070 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
I0919 16:55:20.500246 85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
I0919 16:55:20.500397 85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
I0919 16:55:20.681527 85253 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lvxs0o.g54z5vfgz74yr442 --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510
I0919 16:55:20.681823 85253 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.50.170 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0919 16:55:20.681871 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lvxs0o.g54z5vfgz74yr442 --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-415589-m02"
I0919 16:55:20.723585 85253 command_runner.go:130] > [preflight] Running pre-flight checks
I0919 16:55:20.888718 85253 command_runner.go:130] > [preflight] Reading configuration from the cluster...
I0919 16:55:20.888750 85253 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0919 16:55:20.927308 85253 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0919 16:55:20.927341 85253 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0919 16:55:20.927350 85253 command_runner.go:130] > [kubelet-start] Starting the kubelet
I0919 16:55:21.049684 85253 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0919 16:55:23.092606 85253 command_runner.go:130] > This node has joined the cluster:
I0919 16:55:23.092638 85253 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
I0919 16:55:23.092650 85253 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
I0919 16:55:23.092660 85253 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
I0919 16:55:23.094423 85253 command_runner.go:130] ! W0919 16:55:20.719408 1164 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0919 16:55:23.094452 85253 command_runner.go:130] ! [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0919 16:55:23.094525 85253 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lvxs0o.g54z5vfgz74yr442 --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-415589-m02": (2.412618672s)
I0919 16:55:23.094564 85253 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
I0919 16:55:23.319102 85253 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
I0919 16:55:23.319150 85253 start.go:306] JoinCluster complete in 2.823297884s
I0919 16:55:23.319166 85253 cni.go:84] Creating CNI manager for ""
I0919 16:55:23.319183 85253 cni.go:136] 2 nodes found, recommending kindnet
I0919 16:55:23.319248 85253 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0919 16:55:23.324833 85253 command_runner.go:130] > File: /opt/cni/bin/portmap
I0919 16:55:23.324850 85253 command_runner.go:130] > Size: 2615256 Blocks: 5112 IO Block: 4096 regular file
I0919 16:55:23.324857 85253 command_runner.go:130] > Device: 11h/17d Inode: 3544 Links: 1
I0919 16:55:23.324863 85253 command_runner.go:130] > Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
I0919 16:55:23.324869 85253 command_runner.go:130] > Access: 2023-09-19 16:53:37.309210321 +0000
I0919 16:55:23.324874 85253 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
I0919 16:55:23.324882 85253 command_runner.go:130] > Change: 2023-09-19 16:53:35.557210321 +0000
I0919 16:55:23.324888 85253 command_runner.go:130] > Birth: -
I0919 16:55:23.325228 85253 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
I0919 16:55:23.325243 85253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0919 16:55:23.342847 85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0919 16:55:23.645641 85253 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
I0919 16:55:23.649658 85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
I0919 16:55:23.652505 85253 command_runner.go:130] > serviceaccount/kindnet unchanged
I0919 16:55:23.664138 85253 command_runner.go:130] > daemonset.apps/kindnet configured
I0919 16:55:23.667036 85253 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17240-65689/kubeconfig
I0919 16:55:23.667284 85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0919 16:55:23.667614 85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
I0919 16:55:23.667628 85253 round_trippers.go:469] Request Headers:
I0919 16:55:23.667639 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:23.667648 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:23.669483 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:55:23.669498 85253 round_trippers.go:577] Response Headers:
I0919 16:55:23.669505 85253 round_trippers.go:580] Content-Length: 291
I0919 16:55:23.669510 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:23 GMT
I0919 16:55:23.669516 85253 round_trippers.go:580] Audit-Id: a7e12e49-a619-4239-974e-6f74a31fab43
I0919 16:55:23.669521 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:23.669528 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:23.669537 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:23.669544 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:23.669600 85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"417","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
I0919 16:55:23.669717 85253 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-415589" context rescaled to 1 replicas
I0919 16:55:23.669749 85253 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.50.170 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
I0919 16:55:23.672301 85253 out.go:177] * Verifying Kubernetes components...
I0919 16:55:23.674012 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0919 16:55:23.687762 85253 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/17240-65689/kubeconfig
I0919 16:55:23.688061 85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0919 16:55:23.688313 85253 node_ready.go:35] waiting up to 6m0s for node "multinode-415589-m02" to be "Ready" ...
I0919 16:55:23.688375 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:23.688382 85253 round_trippers.go:469] Request Headers:
I0919 16:55:23.688390 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:23.688396 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:23.694935 85253 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0919 16:55:23.694954 85253 round_trippers.go:577] Response Headers:
I0919 16:55:23.694962 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:23.694967 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:23.694972 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:23 GMT
I0919 16:55:23.694977 85253 round_trippers.go:580] Audit-Id: 91bb95cb-b0fc-4cff-851c-378e69b586cd
I0919 16:55:23.694983 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:23.694988 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:23.694992 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:23.695417 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:23.695702 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:23.695715 85253 round_trippers.go:469] Request Headers:
I0919 16:55:23.695726 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:23.695734 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:23.701000 85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0919 16:55:23.701017 85253 round_trippers.go:577] Response Headers:
I0919 16:55:23.701024 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:23.701032 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:23.701038 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:23.701043 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:23 GMT
I0919 16:55:23.701048 85253 round_trippers.go:580] Audit-Id: c37695e3-053c-450c-b5c4-e474a565f6e3
I0919 16:55:23.701056 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:23.701063 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:23.701188 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:24.201546 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:24.201569 85253 round_trippers.go:469] Request Headers:
I0919 16:55:24.201577 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:24.201583 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:24.205054 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:24.205078 85253 round_trippers.go:577] Response Headers:
I0919 16:55:24.205091 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:24.205101 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:24.205108 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:24.205115 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:24.205123 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:24.205131 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:24 GMT
I0919 16:55:24.205144 85253 round_trippers.go:580] Audit-Id: 553ada15-eabd-4f27-8bb5-2cb7cf80744d
I0919 16:55:24.205191 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:24.701770 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:24.701793 85253 round_trippers.go:469] Request Headers:
I0919 16:55:24.701801 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:24.701807 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:24.704734 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:24.704759 85253 round_trippers.go:577] Response Headers:
I0919 16:55:24.704771 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:24 GMT
I0919 16:55:24.704779 85253 round_trippers.go:580] Audit-Id: 914d3424-3a8f-4709-aa7e-c334805d8933
I0919 16:55:24.704784 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:24.704789 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:24.704794 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:24.704799 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:24.704804 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:24.705027 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:25.202202 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:25.202225 85253 round_trippers.go:469] Request Headers:
I0919 16:55:25.202233 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:25.202240 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:25.205102 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:25.205126 85253 round_trippers.go:577] Response Headers:
I0919 16:55:25.205135 85253 round_trippers.go:580] Audit-Id: b58153d5-0210-4178-9bae-c6b41fffb1c7
I0919 16:55:25.205143 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:25.205150 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:25.205158 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:25.205165 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:25.205173 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:25.205183 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:25 GMT
I0919 16:55:25.205353 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:25.702600 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:25.702623 85253 round_trippers.go:469] Request Headers:
I0919 16:55:25.702632 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:25.702638 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:25.705574 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:25.705604 85253 round_trippers.go:577] Response Headers:
I0919 16:55:25.705630 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:25.705641 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:25.705651 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:25.705660 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:25 GMT
I0919 16:55:25.705672 85253 round_trippers.go:580] Audit-Id: 0d3e4335-21aa-44d4-98af-f7ac5363b6ee
I0919 16:55:25.705681 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:25.705697 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:25.705856 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:25.706142 85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
I0919 16:55:26.202555 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:26.202580 85253 round_trippers.go:469] Request Headers:
I0919 16:55:26.202589 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:26.202597 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:26.205487 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:26.205505 85253 round_trippers.go:577] Response Headers:
I0919 16:55:26.205513 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:26.205518 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:26.205523 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:26 GMT
I0919 16:55:26.205528 85253 round_trippers.go:580] Audit-Id: b5cef305-be4e-41f3-b406-55ce5f65d0ea
I0919 16:55:26.205534 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:26.205544 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:26.205558 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:26.205777 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:26.702435 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:26.702459 85253 round_trippers.go:469] Request Headers:
I0919 16:55:26.702467 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:26.702474 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:26.705052 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:26.705073 85253 round_trippers.go:577] Response Headers:
I0919 16:55:26.705080 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:26 GMT
I0919 16:55:26.705085 85253 round_trippers.go:580] Audit-Id: 870a34de-2f2b-4b18-baec-d9a4b057af16
I0919 16:55:26.705090 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:26.705095 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:26.705101 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:26.705110 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:26.705115 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:26.705154 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:27.201782 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:27.201810 85253 round_trippers.go:469] Request Headers:
I0919 16:55:27.201823 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:27.201833 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:27.206925 85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0919 16:55:27.206957 85253 round_trippers.go:577] Response Headers:
I0919 16:55:27.206970 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:27 GMT
I0919 16:55:27.206982 85253 round_trippers.go:580] Audit-Id: 4debd3c4-57c9-41ad-89df-f25ae5e392d9
I0919 16:55:27.206992 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:27.207002 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:27.207017 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:27.207031 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:27.207045 85253 round_trippers.go:580] Content-Length: 3485
I0919 16:55:27.207221 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
I0919 16:55:27.701860 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:27.701891 85253 round_trippers.go:469] Request Headers:
I0919 16:55:27.701905 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:27.701917 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:27.705535 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:27.705561 85253 round_trippers.go:577] Response Headers:
I0919 16:55:27.705571 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:27.705580 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:27.705587 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:27.705596 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:27.705603 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:27.705660 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:27 GMT
I0919 16:55:27.705680 85253 round_trippers.go:580] Audit-Id: 74bff726-eea9-44ef-afbf-c9a8e94a6518
I0919 16:55:27.705843 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:27.706167 85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
I0919 16:55:28.202443 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:28.202466 85253 round_trippers.go:469] Request Headers:
I0919 16:55:28.202475 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:28.202484 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:28.206034 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:28.206051 85253 round_trippers.go:577] Response Headers:
I0919 16:55:28.206058 85253 round_trippers.go:580] Audit-Id: cd4347d4-f582-4623-9efb-82e18bed2113
I0919 16:55:28.206063 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:28.206068 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:28.206073 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:28.206078 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:28.206083 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:28.206089 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:28 GMT
I0919 16:55:28.206145 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:28.701820 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:28.701857 85253 round_trippers.go:469] Request Headers:
I0919 16:55:28.701869 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:28.701879 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:28.704940 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:28.704966 85253 round_trippers.go:577] Response Headers:
I0919 16:55:28.704977 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:28.704985 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:28.704993 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:28 GMT
I0919 16:55:28.705002 85253 round_trippers.go:580] Audit-Id: 4c79e7ac-32e0-4d36-be6a-d6e214398f2e
I0919 16:55:28.705010 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:28.705022 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:28.705030 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:28.705124 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:29.202507 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:29.202539 85253 round_trippers.go:469] Request Headers:
I0919 16:55:29.202551 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:29.202560 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:29.205144 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:29.205168 85253 round_trippers.go:577] Response Headers:
I0919 16:55:29.205179 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:29.205188 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:29.205196 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:29.205204 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:29 GMT
I0919 16:55:29.205217 85253 round_trippers.go:580] Audit-Id: d2442b67-4b5c-4047-9ccc-4d9bdfe470f9
I0919 16:55:29.205225 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:29.205244 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:29.205340 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:29.701809 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:29.701832 85253 round_trippers.go:469] Request Headers:
I0919 16:55:29.701841 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:29.701847 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:29.704600 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:29.704631 85253 round_trippers.go:577] Response Headers:
I0919 16:55:29.704642 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:29.704652 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:29.704664 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:29.704687 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:29 GMT
I0919 16:55:29.704701 85253 round_trippers.go:580] Audit-Id: 619cabf8-0efc-437c-b0c4-6b97d754001c
I0919 16:55:29.704713 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:29.704725 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:29.704806 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:30.201674 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:30.201697 85253 round_trippers.go:469] Request Headers:
I0919 16:55:30.201708 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:30.201716 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:30.205070 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:30.205096 85253 round_trippers.go:577] Response Headers:
I0919 16:55:30.205105 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:30.205113 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:30.205120 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:30.205128 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:30.205138 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:30 GMT
I0919 16:55:30.205151 85253 round_trippers.go:580] Audit-Id: 47a55611-c403-4cd7-878e-a97f18395a5b
I0919 16:55:30.205158 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:30.205275 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:30.205565 85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
I0919 16:55:30.701581 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:30.701606 85253 round_trippers.go:469] Request Headers:
I0919 16:55:30.701630 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:30.701640 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:30.704917 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:30.704940 85253 round_trippers.go:577] Response Headers:
I0919 16:55:30.704954 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:30.704962 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:30.704970 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:30.704978 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:30.704984 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:30 GMT
I0919 16:55:30.704992 85253 round_trippers.go:580] Audit-Id: 6a173b36-96dd-446f-b1af-f195a7f9d5ee
I0919 16:55:30.705001 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:30.705082 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:31.201575 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:31.201601 85253 round_trippers.go:469] Request Headers:
I0919 16:55:31.201610 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:31.201627 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:31.204728 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:31.204745 85253 round_trippers.go:577] Response Headers:
I0919 16:55:31.204752 85253 round_trippers.go:580] Audit-Id: 7bc97be9-7e6e-4bc6-b1b0-05556c1990f2
I0919 16:55:31.204761 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:31.204767 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:31.204772 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:31.204777 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:31.204782 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:31.204792 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:31 GMT
I0919 16:55:31.204851 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:31.702519 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:31.702543 85253 round_trippers.go:469] Request Headers:
I0919 16:55:31.702552 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:31.702558 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:31.705585 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:31.705597 85253 round_trippers.go:577] Response Headers:
I0919 16:55:31.705603 85253 round_trippers.go:580] Audit-Id: b8069767-ee39-44b0-8935-e099e18f543b
I0919 16:55:31.705608 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:31.705632 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:31.705641 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:31.705651 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:31.705663 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:31.705671 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:31 GMT
I0919 16:55:31.705742 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:32.202360 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:32.202383 85253 round_trippers.go:469] Request Headers:
I0919 16:55:32.202391 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:32.202397 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:32.205402 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:32.205430 85253 round_trippers.go:577] Response Headers:
I0919 16:55:32.205441 85253 round_trippers.go:580] Audit-Id: 880c8325-06a4-4df9-803a-6ee6b237c8fe
I0919 16:55:32.205450 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:32.205459 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:32.205471 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:32.205482 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:32.205490 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:32.205501 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:32 GMT
I0919 16:55:32.205598 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:32.205944 85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
I0919 16:55:32.701824 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:32.701848 85253 round_trippers.go:469] Request Headers:
I0919 16:55:32.701860 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:32.701868 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:32.705125 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:32.705150 85253 round_trippers.go:577] Response Headers:
I0919 16:55:32.705167 85253 round_trippers.go:580] Content-Length: 3594
I0919 16:55:32.705176 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:32 GMT
I0919 16:55:32.705188 85253 round_trippers.go:580] Audit-Id: 70bca672-9e67-41b6-aefa-8bd9c065262c
I0919 16:55:32.705198 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:32.705208 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:32.705216 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:32.705231 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:32.705346 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
I0919 16:55:33.202224 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:33.202249 85253 round_trippers.go:469] Request Headers:
I0919 16:55:33.202257 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:33.202263 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:33.206515 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:55:33.206537 85253 round_trippers.go:577] Response Headers:
I0919 16:55:33.206547 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:33 GMT
I0919 16:55:33.206554 85253 round_trippers.go:580] Audit-Id: d5bb848b-69da-4afc-8838-0db90c489392
I0919 16:55:33.206567 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:33.206574 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:33.206583 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:33.206592 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:33.206602 85253 round_trippers.go:580] Content-Length: 3863
I0919 16:55:33.206854 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
I0919 16:55:33.701678 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:33.701699 85253 round_trippers.go:469] Request Headers:
I0919 16:55:33.701707 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:33.701719 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:33.705206 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:33.705231 85253 round_trippers.go:577] Response Headers:
I0919 16:55:33.705251 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:33.705259 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:33.705265 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:33.705274 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:33.705287 85253 round_trippers.go:580] Content-Length: 3863
I0919 16:55:33.705315 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:33 GMT
I0919 16:55:33.705326 85253 round_trippers.go:580] Audit-Id: 3182db7b-4fbd-47bb-9a32-780c008cf00f
I0919 16:55:33.705417 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
I0919 16:55:34.201682 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:34.201708 85253 round_trippers.go:469] Request Headers:
I0919 16:55:34.201716 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:34.201722 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:34.205796 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:55:34.205827 85253 round_trippers.go:577] Response Headers:
I0919 16:55:34.205838 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:34.205846 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:34.205851 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:34.205858 85253 round_trippers.go:580] Content-Length: 3863
I0919 16:55:34.205863 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:34 GMT
I0919 16:55:34.205871 85253 round_trippers.go:580] Audit-Id: 95390aab-08bd-4fb1-b1d7-3691517f17a2
I0919 16:55:34.205879 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:34.206026 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
I0919 16:55:34.206278 85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
I0919 16:55:34.702549 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:34.702572 85253 round_trippers.go:469] Request Headers:
I0919 16:55:34.702582 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:34.702588 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:34.705324 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:34.705350 85253 round_trippers.go:577] Response Headers:
I0919 16:55:34.705360 85253 round_trippers.go:580] Audit-Id: d35964ff-a80b-4162-bae9-99046d34d339
I0919 16:55:34.705369 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:34.705382 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:34.705399 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:34.705411 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:34.705422 85253 round_trippers.go:580] Content-Length: 3863
I0919 16:55:34.705434 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:34 GMT
I0919 16:55:34.705538 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
I0919 16:55:35.202100 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:35.202123 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.202132 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.202137 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.205833 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:35.205848 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.205854 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.205860 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.205866 85253 round_trippers.go:580] Content-Length: 3863
I0919 16:55:35.205871 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.205876 85253 round_trippers.go:580] Audit-Id: d774542d-67bc-47da-8f82-f18b224250a6
I0919 16:55:35.205881 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.205887 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.205951 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
I0919 16:55:35.702144 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:35.702168 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.702177 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.702182 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.706480 85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0919 16:55:35.706504 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.706512 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.706517 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.706523 85253 round_trippers.go:580] Content-Length: 3729
I0919 16:55:35.706529 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.706534 85253 round_trippers.go:580] Audit-Id: 0721d636-e7bc-40dc-9010-597bfe183d1f
I0919 16:55:35.706542 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.706548 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.706619 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"509","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2705 chars]
I0919 16:55:35.706865 85253 node_ready.go:49] node "multinode-415589-m02" has status "Ready":"True"
I0919 16:55:35.706879 85253 node_ready.go:38] duration metric: took 12.01855268s waiting for node "multinode-415589-m02" to be "Ready" ...
I0919 16:55:35.706891 85253 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
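The polling above (a GET of /api/v1/nodes/multinode-415589-m02 roughly every 500ms until the node's Ready condition reports True) can be approximated with the public client-go API. A minimal sketch, assuming a standard kubeconfig; the 500ms interval and 6m timeout mirror this log rather than minikube's internal node_ready helper, and everything not shown in the log (package layout, error handling) is illustrative:

// node_ready_sketch.go -- illustrative only, not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the repeated GETs visible in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-415589-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}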
I0919 16:55:35.706946 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
I0919 16:55:35.706954 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.706961 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.706966 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.712976 85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0919 16:55:35.712999 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.713009 85253 round_trippers.go:580] Audit-Id: eb85a93e-ad5e-4343-b018-623ce9a1e5b4
I0919 16:55:35.713015 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.713020 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.713025 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.713030 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.713035 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.721118 85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67482 chars]
I0919 16:55:35.723180 85253 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.723255 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
I0919 16:55:35.723263 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.723270 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.723276 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.726205 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:35.726223 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.726233 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.726240 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.726251 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.726260 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.726265 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.726270 85253 round_trippers.go:580] Audit-Id: 1ca77441-e6bb-4d84-ae80-c1964628ed16
I0919 16:55:35.726963 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
I0919 16:55:35.727353 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:35.727368 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.727374 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.727380 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.731047 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:35.731061 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.731067 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.731073 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.731081 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.731090 85253 round_trippers.go:580] Audit-Id: 4355d6a0-01aa-4f79-b54a-fc7b054d228c
I0919 16:55:35.731102 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.731116 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.731197 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
I0919 16:55:35.731458 85253 pod_ready.go:92] pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace has status "Ready":"True"
I0919 16:55:35.731470 85253 pod_ready.go:81] duration metric: took 8.270533ms waiting for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.731477 85253 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.731520 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-415589
I0919 16:55:35.731528 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.731534 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.731540 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.733436 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:55:35.733455 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.733463 85253 round_trippers.go:580] Audit-Id: 53bc64f4-ab9d-4976-b10a-7446df67b0a3
I0919 16:55:35.733471 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.733478 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.733489 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.733496 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.733508 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.734404 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-415589","namespace":"kube-system","uid":"1dbf3be3-1373-453b-a745-575b7f604586","resourceVersion":"383","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.11:2379","kubernetes.io/config.hash":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.mirror":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.seen":"2023-09-19T16:54:11.230739231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
I0919 16:55:35.734838 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:35.734852 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.734859 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.734865 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.736677 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:55:35.736690 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.736696 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.736701 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.736706 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.736714 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.736725 85253 round_trippers.go:580] Audit-Id: 6e8160b3-094d-4760-858f-4ab6c86ef72b
I0919 16:55:35.736730 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.736867 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
I0919 16:55:35.737223 85253 pod_ready.go:92] pod "etcd-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:55:35.737241 85253 pod_ready.go:81] duration metric: took 5.758956ms waiting for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.737253 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.737301 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-415589
I0919 16:55:35.737308 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.737315 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.737321 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.739057 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:55:35.739071 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.739076 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.739082 85253 round_trippers.go:580] Audit-Id: d807ee33-902c-4e1e-993d-07a3e1463870
I0919 16:55:35.739087 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.739103 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.739115 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.739123 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.739287 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-415589","namespace":"kube-system","uid":"4ecf615e-9f92-46f8-8b34-9de418bca0ac","resourceVersion":"384","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.11:8443","kubernetes.io/config.hash":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.mirror":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.seen":"2023-09-19T16:54:11.230732561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
I0919 16:55:35.739724 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:35.739737 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.739750 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.739767 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.741561 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:55:35.741581 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.741590 85253 round_trippers.go:580] Audit-Id: 712bc466-fd71-4c4d-b5ee-1fb3befb699f
I0919 16:55:35.741598 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.741605 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.741627 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.741640 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.741647 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.741854 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
I0919 16:55:35.742136 85253 pod_ready.go:92] pod "kube-apiserver-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:55:35.742150 85253 pod_ready.go:81] duration metric: took 4.886937ms waiting for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.742160 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.742206 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-415589
I0919 16:55:35.742215 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.742226 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.742234 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.744044 85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0919 16:55:35.744063 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.744072 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.744079 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.744088 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.744096 85253 round_trippers.go:580] Audit-Id: 65f50667-96fc-49ce-ae21-868d02a7f1fd
I0919 16:55:35.744105 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.744116 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.744301 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-415589","namespace":"kube-system","uid":"3b76511f-a4ea-484d-a0f7-6968c3abf350","resourceVersion":"385","creationTimestamp":"2023-09-19T16:54:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.mirror":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.seen":"2023-09-19T16:54:02.792831460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
I0919 16:55:35.744623 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:35.744634 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.744640 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.744646 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.746763 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:35.746780 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.746788 85253 round_trippers.go:580] Audit-Id: f5322e70-d174-4867-af35-9f447ec402d7
I0919 16:55:35.746796 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.746807 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.746822 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.746835 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.746840 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.747018 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
I0919 16:55:35.747284 85253 pod_ready.go:92] pod "kube-controller-manager-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:55:35.747298 85253 pod_ready.go:81] duration metric: took 5.131834ms waiting for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.747307 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxjql" in "kube-system" namespace to be "Ready" ...
I0919 16:55:35.902711 85253 request.go:629] Waited for 155.344797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxjql
I0919 16:55:35.902803 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxjql
I0919 16:55:35.902815 85253 round_trippers.go:469] Request Headers:
I0919 16:55:35.902825 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:35.902832 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:35.906200 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:35.906224 85253 round_trippers.go:577] Response Headers:
I0919 16:55:35.906234 85253 round_trippers.go:580] Audit-Id: 9d6da219-1207-4844-baa8-86ae307f47b6
I0919 16:55:35.906241 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:35.906249 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:35.906255 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:35.906261 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:35.906266 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:35 GMT
I0919 16:55:35.906801 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxjql","generateName":"kube-proxy-","namespace":"kube-system","uid":"6cebe5c5-4e29-4835-84b9-057c096c799a","resourceVersion":"495","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5f6891df-57ac-4a88-9703-82c35d43e2eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6891df-57ac-4a88-9703-82c35d43e2eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
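The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter (by default about 5 requests/s with a burst of 10), which the rapid per-pod and per-node GETs above exhaust; the waits are local to the client, not imposed by the API server. A minimal sketch of relaxing that limiter on a rest.Config, where the QPS/Burst values are arbitrary examples and not minikube's settings:

// throttle_sketch.go -- illustrative only.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst reduces the client-side waits logged by request.go.
	// These numbers are examples, not a recommendation.
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs
	fmt.Println("client configured with relaxed client-side rate limits")
}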
I0919 16:55:36.102730 85253 request.go:629] Waited for 195.394934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:36.102818 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
I0919 16:55:36.102831 85253 round_trippers.go:469] Request Headers:
I0919 16:55:36.102843 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:36.102858 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:36.106043 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:36.106070 85253 round_trippers.go:577] Response Headers:
I0919 16:55:36.106081 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:36.106090 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:36.106098 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:36.106107 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:36.106114 85253 round_trippers.go:580] Content-Length: 3729
I0919 16:55:36.106126 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:36 GMT
I0919 16:55:36.106133 85253 round_trippers.go:580] Audit-Id: 77c620bc-2642-4da5-869f-56d2927a88cf
I0919 16:55:36.106248 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"509","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2705 chars]
I0919 16:55:36.106579 85253 pod_ready.go:92] pod "kube-proxy-hxjql" in "kube-system" namespace has status "Ready":"True"
I0919 16:55:36.106605 85253 pod_ready.go:81] duration metric: took 359.291297ms waiting for pod "kube-proxy-hxjql" in "kube-system" namespace to be "Ready" ...
I0919 16:55:36.106620 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
I0919 16:55:36.303051 85253 request.go:629] Waited for 196.339109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6jtp
I0919 16:55:36.303115 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6jtp
I0919 16:55:36.303120 85253 round_trippers.go:469] Request Headers:
I0919 16:55:36.303128 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:36.303134 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:36.306061 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:36.306086 85253 round_trippers.go:577] Response Headers:
I0919 16:55:36.306094 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:36.306100 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:36.306108 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:36.306116 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:36.306129 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:36 GMT
I0919 16:55:36.306141 85253 round_trippers.go:580] Audit-Id: 58bd7e8f-1052-4456-8883-18d8c69f9483
I0919 16:55:36.306456 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6jtp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1f6a8f6-f608-4f79-9fd4-1a570bde14a6","resourceVersion":"376","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5f6891df-57ac-4a88-9703-82c35d43e2eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6891df-57ac-4a88-9703-82c35d43e2eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
I0919 16:55:36.502332 85253 request.go:629] Waited for 195.321555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:36.502394 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:36.502399 85253 round_trippers.go:469] Request Headers:
I0919 16:55:36.502406 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:36.502412 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:36.505333 85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0919 16:55:36.505356 85253 round_trippers.go:577] Response Headers:
I0919 16:55:36.505365 85253 round_trippers.go:580] Audit-Id: b7938f84-6d9b-4604-adde-0c04bd478166
I0919 16:55:36.505373 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:36.505382 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:36.505389 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:36.505397 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:36.505407 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:36 GMT
I0919 16:55:36.505499 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
I0919 16:55:36.505836 85253 pod_ready.go:92] pod "kube-proxy-r6jtp" in "kube-system" namespace has status "Ready":"True"
I0919 16:55:36.505853 85253 pod_ready.go:81] duration metric: took 399.224616ms waiting for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
I0919 16:55:36.505866 85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:36.702294 85253 request.go:629] Waited for 196.330343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
I0919 16:55:36.702359 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
I0919 16:55:36.702364 85253 round_trippers.go:469] Request Headers:
I0919 16:55:36.702373 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:36.702400 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:36.705509 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:36.705533 85253 round_trippers.go:577] Response Headers:
I0919 16:55:36.705544 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:36 GMT
I0919 16:55:36.705553 85253 round_trippers.go:580] Audit-Id: e386c998-e4de-4a1a-8788-153743d96eb5
I0919 16:55:36.705561 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:36.705569 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:36.705581 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:36.705592 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:36.706362 85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-415589","namespace":"kube-system","uid":"6f43b8d1-3b77-4df6-8b66-7d08cf7c0682","resourceVersion":"362","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.mirror":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.seen":"2023-09-19T16:54:11.230737337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
I0919 16:55:36.902267 85253 request.go:629] Waited for 194.938605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:36.902326 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
I0919 16:55:36.902331 85253 round_trippers.go:469] Request Headers:
I0919 16:55:36.902339 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:36.902345 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:36.905395 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:36.905421 85253 round_trippers.go:577] Response Headers:
I0919 16:55:36.905430 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:36.905437 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:36.905445 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:36.905454 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:36 GMT
I0919 16:55:36.905467 85253 round_trippers.go:580] Audit-Id: e3f02859-9ef9-4cb2-8510-41ddc2f0d479
I0919 16:55:36.905474 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:36.905932 85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
I0919 16:55:36.906275 85253 pod_ready.go:92] pod "kube-scheduler-multinode-415589" in "kube-system" namespace has status "Ready":"True"
I0919 16:55:36.906292 85253 pod_ready.go:81] duration metric: took 400.416963ms waiting for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
I0919 16:55:36.906302 85253 pod_ready.go:38] duration metric: took 1.199397384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0919 16:55:36.906322 85253 system_svc.go:44] waiting for kubelet service to be running ....
I0919 16:55:36.906379 85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0919 16:55:36.919851 85253 system_svc.go:56] duration metric: took 13.515461ms WaitForService to wait for kubelet.
I0919 16:55:36.919881 85253 kubeadm.go:581] duration metric: took 13.250094673s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
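The kubelet check logged just above runs systemctl is-active --quiet against the kubelet unit on the node via SSH and treats a zero exit status as "running". A minimal local sketch of the same probe, assuming direct execution on the node rather than minikube's ssh_runner and omitting sudo:

// kubelet_check_sketch.go -- illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive returns true when "systemctl is-active --quiet kubelet"
// exits 0, i.e. the unit is in the active state.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	if kubeletActive() {
		fmt.Println("kubelet service is running")
	} else {
		fmt.Println("kubelet service is not active")
	}
}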
I0919 16:55:36.919910 85253 node_conditions.go:102] verifying NodePressure condition ...
I0919 16:55:37.102318 85253 request.go:629] Waited for 182.31861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes
I0919 16:55:37.102379 85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes
I0919 16:55:37.102395 85253 round_trippers.go:469] Request Headers:
I0919 16:55:37.102406 85253 round_trippers.go:473] Accept: application/json, */*
I0919 16:55:37.102413 85253 round_trippers.go:473] User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
I0919 16:55:37.105565 85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0919 16:55:37.105590 85253 round_trippers.go:577] Response Headers:
I0919 16:55:37.105598 85253 round_trippers.go:580] Content-Type: application/json
I0919 16:55:37.105606 85253 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
I0919 16:55:37.105626 85253 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
I0919 16:55:37.105636 85253 round_trippers.go:580] Date: Tue, 19 Sep 2023 16:55:37 GMT
I0919 16:55:37.105645 85253 round_trippers.go:580] Audit-Id: dc355d8f-ee54-4443-8346-47a98e5197bc
I0919 16:55:37.105652 85253 round_trippers.go:580] Cache-Control: no-cache, private
I0919 16:55:37.106133 85253 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"510"},"items":[{"metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 8708 chars]
I0919 16:55:37.106637 85253 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0919 16:55:37.106661 85253 node_conditions.go:123] node cpu capacity is 2
I0919 16:55:37.106674 85253 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0919 16:55:37.106680 85253 node_conditions.go:123] node cpu capacity is 2
I0919 16:55:37.106686 85253 node_conditions.go:105] duration metric: took 186.767088ms to run NodePressure ...
I0919 16:55:37.106699 85253 start.go:228] waiting for startup goroutines ...
I0919 16:55:37.106728 85253 start.go:242] writing updated cluster config ...
I0919 16:55:37.107027 85253 ssh_runner.go:195] Run: rm -f paused
I0919 16:55:37.158485 85253 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
I0919 16:55:37.162156 85253 out.go:177] * Done! kubectl is now configured to use "multinode-415589" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Tue 2023-09-19 16:53:36 UTC, ends at Tue 2023-09-19 16:56:59 UTC. --
Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.489451148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498672722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498748720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498773757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498789870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:54:36 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:54:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a48a984b726602555fe6103a682cf7c01cbdc4cfc063e347b37e7b664cd0efd9/resolv.conf as [nameserver 192.168.122.1]"
Sep 19 16:54:37 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:54:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dc73b0c19acc1823f938bdac00e9aef48901d30a9938252e5bfa445f3b60ab4/resolv.conf as [nameserver 192.168.122.1]"
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117015304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117071075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117097252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117108566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.233644781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.233828763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.233990405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.234051666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.380980760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.381116014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.381144449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.381156429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:55:38 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9baecebc6dd1099654601979e6cbbfaa20f3e668e516fe2af70cd5d43fe75ab4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
Sep 19 16:55:40 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:55:40Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.114754708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.114972339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.114997712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.115089196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
d7a3e4d244557 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 About a minute ago Running busybox 0 9baecebc6dd10 busybox-5bc68d56bd-rkqh6
b87361f4b0e67 ead0a4a53df89 2 minutes ago Running coredns 0 4dc73b0c19acc coredns-5dd5756b68-ctsv5
330b2b5636032 6e38f40d628db 2 minutes ago Running storage-provisioner 0 a48a984b72660 storage-provisioner
8fcfd36bfc2b0 kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 2 minutes ago Running kindnet-cni 0 84dcc3c9931a1 kindnet-w9q5z
7dafb88f7c1fc c120fed2beb84 2 minutes ago Running kube-proxy 0 fb0b0cb556e8e kube-proxy-r6jtp
1979af9a7d9b7 73deb9a3f7025 2 minutes ago Running etcd 0 2a9a021fe9dc3 etcd-multinode-415589
bfef71d52559a 7a5d9d67a13f6 2 minutes ago Running kube-scheduler 0 6a7de8b20db05 kube-scheduler-multinode-415589
ff647b080408d cdcab12b2dd16 2 minutes ago Running kube-apiserver 0 24b04414fbb49 kube-apiserver-multinode-415589
54fbef2163632 55f13c92defb1 2 minutes ago Running kube-controller-manager 0 06ff8d69d511e kube-controller-manager-multinode-415589
*
* ==> coredns [b87361f4b0e6] <==
* [INFO] 10.244.1.2:50553 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202366s
[INFO] 10.244.0.3:47362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099929s
[INFO] 10.244.0.3:54185 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001856783s
[INFO] 10.244.0.3:55758 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164664s
[INFO] 10.244.0.3:35778 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065438s
[INFO] 10.244.0.3:51650 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00129839s
[INFO] 10.244.0.3:33357 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058673s
[INFO] 10.244.0.3:50578 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092136s
[INFO] 10.244.0.3:57002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074049s
[INFO] 10.244.1.2:47019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181883s
[INFO] 10.244.1.2:34149 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167069s
[INFO] 10.244.1.2:47304 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102053s
[INFO] 10.244.1.2:35347 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102856s
[INFO] 10.244.0.3:39095 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086025s
[INFO] 10.244.0.3:49675 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006068s
[INFO] 10.244.0.3:44686 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037939s
[INFO] 10.244.0.3:53348 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041986s
[INFO] 10.244.1.2:46588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165902s
[INFO] 10.244.1.2:35220 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227022s
[INFO] 10.244.1.2:37672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196002s
[INFO] 10.244.1.2:52969 - 5 "PTR IN 1.50.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000204714s
[INFO] 10.244.0.3:53988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077772s
[INFO] 10.244.0.3:40409 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000041061s
[INFO] 10.244.0.3:42980 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000039389s
[INFO] 10.244.0.3:37395 - 5 "PTR IN 1.50.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000038778s
*
* ==> describe nodes <==
* Name: multinode-415589
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-415589
kubernetes.io/os=linux
minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
minikube.k8s.io/name=multinode-415589
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_09_19T16_54_12_0700
minikube.k8s.io/version=v1.31.2
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 19 Sep 2023 16:54:07 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-415589
AcquireTime: <unset>
RenewTime: Tue, 19 Sep 2023 16:56:54 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 19 Sep 2023 16:55:43 +0000 Tue, 19 Sep 2023 16:54:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 19 Sep 2023 16:55:43 +0000 Tue, 19 Sep 2023 16:54:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 19 Sep 2023 16:55:43 +0000 Tue, 19 Sep 2023 16:54:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 19 Sep 2023 16:55:43 +0000 Tue, 19 Sep 2023 16:54:36 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.11
Hostname: multinode-415589
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 4445f8c83eec4871b37dec36f475360f
System UUID: 4445f8c8-3eec-4871-b37d-ec36f475360f
Boot ID: b0f45def-c91d-4dd8-b760-0f78f7732ba8
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.2
Kube-Proxy Version: v1.28.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-rkqh6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 82s
kube-system coredns-5dd5756b68-ctsv5 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 2m36s
kube-system etcd-multinode-415589 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 2m48s
kube-system kindnet-w9q5z 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 2m36s
kube-system kube-apiserver-multinode-415589 250m (12%) 0 (0%) 0 (0%) 0 (0%) 2m48s
kube-system kube-controller-manager-multinode-415589 200m (10%) 0 (0%) 0 (0%) 0 (0%) 2m50s
kube-system kube-proxy-r6jtp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m36s
kube-system kube-scheduler-multinode-415589 100m (5%) 0 (0%) 0 (0%) 0 (0%) 2m48s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m35s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 220Mi (10%) 220Mi (10%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m35s kube-proxy
Normal Starting 2m48s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 2m48s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m48s kubelet Node multinode-415589 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m48s kubelet Node multinode-415589 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m48s kubelet Node multinode-415589 status is now: NodeHasSufficientPID
Normal RegisteredNode 2m37s node-controller Node multinode-415589 event: Registered Node multinode-415589 in Controller
Normal NodeReady 2m23s kubelet Node multinode-415589 status is now: NodeReady
Name: multinode-415589-m02
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-415589-m02
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 19 Sep 2023 16:55:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-415589-m02
AcquireTime: <unset>
RenewTime: Tue, 19 Sep 2023 16:56:54 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 19 Sep 2023 16:55:53 +0000 Tue, 19 Sep 2023 16:55:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 19 Sep 2023 16:55:53 +0000 Tue, 19 Sep 2023 16:55:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 19 Sep 2023 16:55:53 +0000 Tue, 19 Sep 2023 16:55:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 19 Sep 2023 16:55:53 +0000 Tue, 19 Sep 2023 16:55:35 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.170
Hostname: multinode-415589-m02
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: ccab75db92294aacb66f42c440b2dfdf
System UUID: ccab75db-9229-4aac-b66f-42c440b2dfdf
Boot ID: d70ba22e-70e0-4ddc-8b47-7016087dc451
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.2
Kube-Proxy Version: v1.28.2
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-5bc68d56bd-9qfss 0 (0%) 0 (0%) 0 (0%) 0 (0%) 82s
kube-system kindnet-64m2w 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 96s
kube-system kube-proxy-hxjql 0 (0%) 0 (0%) 0 (0%) 0 (0%) 96s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 89s kube-proxy
Normal NodeHasSufficientMemory 96s (x5 over 98s) kubelet Node multinode-415589-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 96s (x5 over 98s) kubelet Node multinode-415589-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 96s (x5 over 98s) kubelet Node multinode-415589-m02 status is now: NodeHasSufficientPID
Normal RegisteredNode 92s node-controller Node multinode-415589-m02 event: Registered Node multinode-415589-m02 in Controller
Normal NodeReady 84s kubelet Node multinode-415589-m02 status is now: NodeReady
Name: multinode-415589-m03
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=multinode-415589-m03
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 19 Sep 2023 16:56:12 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: multinode-415589-m03
AcquireTime: <unset>
RenewTime: Tue, 19 Sep 2023 16:56:32 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 19 Sep 2023 16:56:25 +0000 Tue, 19 Sep 2023 16:56:12 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 19 Sep 2023 16:56:25 +0000 Tue, 19 Sep 2023 16:56:12 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 19 Sep 2023 16:56:25 +0000 Tue, 19 Sep 2023 16:56:12 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 19 Sep 2023 16:56:25 +0000 Tue, 19 Sep 2023 16:56:25 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.50.209
Hostname: multinode-415589-m03
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: a4230c5cb77943d2a1409cdd61aeb739
System UUID: a4230c5c-b779-43d2-a140-9cdd61aeb739
Boot ID: 7f2e2251-cd63-4b54-b7d7-e5c85ad80c9a
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.28.2
Kube-Proxy Version: v1.28.2
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-pmpvh 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 46s
kube-system kube-proxy-p8gzq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 40s kube-proxy
Normal Starting 47s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 47s (x2 over 47s) kubelet Node multinode-415589-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 47s (x2 over 47s) kubelet Node multinode-415589-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 47s (x2 over 47s) kubelet Node multinode-415589-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 46s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 42s node-controller Node multinode-415589-m03 event: Registered Node multinode-415589-m03 in Controller
Normal NodeReady 34s kubelet Node multinode-415589-m03 status is now: NodeReady
*
* ==> dmesg <==
* [ +0.072804] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.333698] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.377013] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.141407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +5.049597] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +5.970257] systemd-fstab-generator[547]: Ignoring "noauto" for root device
[ +0.093337] systemd-fstab-generator[558]: Ignoring "noauto" for root device
[ +1.132994] systemd-fstab-generator[736]: Ignoring "noauto" for root device
[ +0.274793] systemd-fstab-generator[774]: Ignoring "noauto" for root device
[ +0.107916] systemd-fstab-generator[785]: Ignoring "noauto" for root device
[ +0.121952] systemd-fstab-generator[798]: Ignoring "noauto" for root device
[ +1.506445] systemd-fstab-generator[957]: Ignoring "noauto" for root device
[ +0.106976] systemd-fstab-generator[968]: Ignoring "noauto" for root device
[ +0.106530] systemd-fstab-generator[979]: Ignoring "noauto" for root device
[ +0.125600] systemd-fstab-generator[990]: Ignoring "noauto" for root device
[ +0.130201] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
[ +4.466467] systemd-fstab-generator[1115]: Ignoring "noauto" for root device
[ +3.001624] kauditd_printk_skb: 53 callbacks suppressed
[Sep19 16:54] systemd-fstab-generator[1497]: Ignoring "noauto" for root device
[ +8.756901] systemd-fstab-generator[2437]: Ignoring "noauto" for root device
[ +13.802980] kauditd_printk_skb: 39 callbacks suppressed
[ +7.151151] kauditd_printk_skb: 14 callbacks suppressed
*
* ==> etcd [1979af9a7d9b] <==
* {"level":"info","ts":"2023-09-19T16:54:05.260334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b switched to configuration voters=(16493399244793407243)"}
{"level":"info","ts":"2023-09-19T16:54:05.260944Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c22b887c03da3da3","local-member-id":"e4e4533a349b670b","added-peer-id":"e4e4533a349b670b","added-peer-peer-urls":["https://192.168.50.11:2380"]}
{"level":"info","ts":"2023-09-19T16:54:06.0075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b is starting a new election at term 1"}
{"level":"info","ts":"2023-09-19T16:54:06.007832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b became pre-candidate at term 1"}
{"level":"info","ts":"2023-09-19T16:54:06.007995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b received MsgPreVoteResp from e4e4533a349b670b at term 1"}
{"level":"info","ts":"2023-09-19T16:54:06.008189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b became candidate at term 2"}
{"level":"info","ts":"2023-09-19T16:54:06.008399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b received MsgVoteResp from e4e4533a349b670b at term 2"}
{"level":"info","ts":"2023-09-19T16:54:06.008591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b became leader at term 2"}
{"level":"info","ts":"2023-09-19T16:54:06.008756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e4e4533a349b670b elected leader e4e4533a349b670b at term 2"}
{"level":"info","ts":"2023-09-19T16:54:06.010749Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e4e4533a349b670b","local-member-attributes":"{Name:multinode-415589 ClientURLs:[https://192.168.50.11:2379]}","request-path":"/0/members/e4e4533a349b670b/attributes","cluster-id":"c22b887c03da3da3","publish-timeout":"7s"}
{"level":"info","ts":"2023-09-19T16:54:06.010912Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-19T16:54:06.011337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-09-19T16:54:06.0122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.11:2379"}
{"level":"info","ts":"2023-09-19T16:54:06.012364Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-19T16:54:06.01275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-09-19T16:54:06.027557Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c22b887c03da3da3","local-member-id":"e4e4533a349b670b","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-19T16:54:06.027723Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-19T16:54:06.027748Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-09-19T16:54:06.012788Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-09-19T16:54:06.027764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-09-19T16:56:13.029749Z","caller":"traceutil/trace.go:171","msg":"trace[469114868] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"217.198508ms","start":"2023-09-19T16:56:12.812527Z","end":"2023-09-19T16:56:13.029726Z","steps":["trace[469114868] 'process raft request' (duration: 173.968595ms)","trace[469114868] 'compare' (duration: 42.963963ms)"],"step_count":2}
{"level":"info","ts":"2023-09-19T16:56:13.029663Z","caller":"traceutil/trace.go:171","msg":"trace[1378110883] linearizableReadLoop","detail":"{readStateIndex:624; appliedIndex:622; }","duration":"164.828804ms","start":"2023-09-19T16:56:12.86479Z","end":"2023-09-19T16:56:13.029619Z","steps":["trace[1378110883] 'read index received' (duration: 121.714313ms)","trace[1378110883] 'applied index is now lower than readState.Index' (duration: 43.113941ms)"],"step_count":2}
{"level":"info","ts":"2023-09-19T16:56:13.030308Z","caller":"traceutil/trace.go:171","msg":"trace[564010888] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"165.315464ms","start":"2023-09-19T16:56:12.864716Z","end":"2023-09-19T16:56:13.030032Z","steps":["trace[564010888] 'process raft request' (duration: 164.861317ms)"],"step_count":1}
{"level":"warn","ts":"2023-09-19T16:56:13.030888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.027914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-415589-m03\" ","response":"range_response_count:1 size:1878"}
{"level":"info","ts":"2023-09-19T16:56:13.030986Z","caller":"traceutil/trace.go:171","msg":"trace[188424040] range","detail":"{range_begin:/registry/minions/multinode-415589-m03; range_end:; response_count:1; response_revision:586; }","duration":"166.195666ms","start":"2023-09-19T16:56:12.864778Z","end":"2023-09-19T16:56:13.030973Z","steps":["trace[188424040] 'agreement among raft nodes before linearized reading' (duration: 165.034798ms)"],"step_count":1}
*
* ==> kernel <==
* 16:56:59 up 3 min, 0 users, load average: 0.33, 0.28, 0.11
Linux multinode-415589 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kindnet [8fcfd36bfc2b] <==
* I0919 16:56:21.848932 1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
I0919 16:56:21.849121 1 main.go:227] handling current node
I0919 16:56:21.849148 1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
I0919 16:56:21.849438 1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24]
I0919 16:56:21.849781 1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
I0919 16:56:21.849883 1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24]
I0919 16:56:21.850336 1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.50.209 Flags: [] Table: 0}
I0919 16:56:31.857112 1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
I0919 16:56:31.857149 1 main.go:227] handling current node
I0919 16:56:31.857171 1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
I0919 16:56:31.857178 1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24]
I0919 16:56:31.857480 1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
I0919 16:56:31.857496 1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24]
I0919 16:56:41.872428 1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
I0919 16:56:41.872568 1 main.go:227] handling current node
I0919 16:56:41.872597 1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
I0919 16:56:41.872766 1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24]
I0919 16:56:41.872995 1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
I0919 16:56:41.873180 1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24]
I0919 16:56:51.880337 1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
I0919 16:56:51.880394 1 main.go:227] handling current node
I0919 16:56:51.880421 1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
I0919 16:56:51.880429 1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24]
I0919 16:56:51.880598 1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
I0919 16:56:51.880643 1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24]
*
* ==> kube-apiserver [ff647b080408] <==
* I0919 16:54:07.640057 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0919 16:54:07.643846 1 shared_informer.go:318] Caches are synced for configmaps
I0919 16:54:07.644018 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0919 16:54:07.644676 1 apf_controller.go:377] Running API Priority and Fairness config worker
I0919 16:54:07.644714 1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
I0919 16:54:07.813553 1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0919 16:54:08.435467 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0919 16:54:08.442968 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0919 16:54:08.443011 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0919 16:54:09.105790 1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0919 16:54:09.181332 1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0919 16:54:09.264406 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0919 16:54:09.272064 1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.50.11]
I0919 16:54:09.273055 1 controller.go:624] quota admission added evaluator for: endpoints
I0919 16:54:09.277463 1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0919 16:54:09.483619 1 controller.go:624] quota admission added evaluator for: serviceaccounts
I0919 16:54:11.054610 1 controller.go:624] quota admission added evaluator for: deployments.apps
I0919 16:54:11.073713 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0919 16:54:11.085035 1 controller.go:624] quota admission added evaluator for: daemonsets.apps
I0919 16:54:23.200424 1 controller.go:624] quota admission added evaluator for: replicasets.apps
I0919 16:54:23.247725 1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
E0919 16:55:22.387793 1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
E0919 16:55:22.387864 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E0919 16:55:22.389702 1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
E0919 16:55:22.390974 1 timeout.go:142] post-timeout activity - time-elapsed: 3.419126ms, GET "/api/v1/services" result: <nil>
*
* ==> kube-controller-manager [54fbef216363] <==
* I0919 16:55:23.046955 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hxjql"
I0919 16:55:23.055072 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-415589-m02" podCIDRs=["10.244.1.0/24"]
I0919 16:55:23.055128 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-64m2w"
I0919 16:55:27.362524 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-415589-m02"
I0919 16:55:27.363054 1 event.go:307] "Event occurred" object="multinode-415589-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-415589-m02 event: Registered Node multinode-415589-m02 in Controller"
I0919 16:55:35.615073 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-415589-m02"
I0919 16:55:37.895652 1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
I0919 16:55:37.920772 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-9qfss"
I0919 16:55:37.949699 1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-rkqh6"
I0919 16:55:37.962082 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.666669ms"
I0919 16:55:37.981762 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.976967ms"
I0919 16:55:37.981892 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.791µs"
I0919 16:55:38.001890 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.922µs"
I0919 16:55:40.158120 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.838252ms"
I0919 16:55:40.159282 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.998µs"
I0919 16:55:40.901205 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.861459ms"
I0919 16:55:40.902558 1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.053µs"
I0919 16:56:13.034837 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-415589-m02"
I0919 16:56:13.036161 1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-415589-m03\" does not exist"
I0919 16:56:13.056038 1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pmpvh"
I0919 16:56:13.056495 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-p8gzq"
I0919 16:56:13.062541 1 range_allocator.go:380] "Set node PodCIDR" node="multinode-415589-m03" podCIDRs=["10.244.2.0/24"]
I0919 16:56:17.383762 1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-415589-m03"
I0919 16:56:17.383780 1 event.go:307] "Event occurred" object="multinode-415589-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-415589-m03 event: Registered Node multinode-415589-m03 in Controller"
I0919 16:56:25.234121 1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-415589-m02"
*
* ==> kube-proxy [7dafb88f7c1f] <==
* I0919 16:54:24.550977 1 server_others.go:69] "Using iptables proxy"
I0919 16:54:24.571064 1 node.go:141] Successfully retrieved node IP: 192.168.50.11
I0919 16:54:24.657576 1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
I0919 16:54:24.657633 1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0919 16:54:24.660779 1 server_others.go:152] "Using iptables Proxier"
I0919 16:54:24.661577 1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0919 16:54:24.661825 1 server.go:846] "Version info" version="v1.28.2"
I0919 16:54:24.661836 1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0919 16:54:24.663079 1 config.go:188] "Starting service config controller"
I0919 16:54:24.663770 1 shared_informer.go:311] Waiting for caches to sync for service config
I0919 16:54:24.663797 1 config.go:315] "Starting node config controller"
I0919 16:54:24.663803 1 shared_informer.go:311] Waiting for caches to sync for node config
I0919 16:54:24.664576 1 config.go:97] "Starting endpoint slice config controller"
I0919 16:54:24.664585 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0919 16:54:24.764136 1 shared_informer.go:318] Caches are synced for node config
I0919 16:54:24.764162 1 shared_informer.go:318] Caches are synced for service config
I0919 16:54:24.765331 1 shared_informer.go:318] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [bfef71d52559] <==
* W0919 16:54:08.494968 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0919 16:54:08.495024 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0919 16:54:08.543795 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0919 16:54:08.543850 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0919 16:54:08.621675 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0919 16:54:08.621742 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0919 16:54:08.721489 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0919 16:54:08.721542 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0919 16:54:08.741802 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0919 16:54:08.741860 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0919 16:54:08.775933 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0919 16:54:08.775999 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0919 16:54:08.782134 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0919 16:54:08.782192 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0919 16:54:08.791610 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0919 16:54:08.791669 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0919 16:54:08.794144 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0919 16:54:08.794210 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0919 16:54:08.806765 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0919 16:54:08.806826 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0919 16:54:08.866895 1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0919 16:54:08.866954 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0919 16:54:09.093117 1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0919 16:54:09.093194 1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0919 16:54:11.868094 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Tue 2023-09-19 16:53:36 UTC, ends at Tue 2023-09-19 16:57:00 UTC. --
Sep 19 16:54:27 multinode-415589 kubelet[2458]: I0919 16:54:27.401991 2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r6jtp" podStartSLOduration=4.401770934 podCreationTimestamp="2023-09-19 16:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:54:27.400598693 +0000 UTC m=+16.383746492" watchObservedRunningTime="2023-09-19 16:54:27.401770934 +0000 UTC m=+16.384918730"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.016547 2458 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.051143 2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-w9q5z" podStartSLOduration=9.602827124000001 podCreationTimestamp="2023-09-19 16:54:23 +0000 UTC" firstStartedPulling="2023-09-19 16:54:27.366686702 +0000 UTC m=+16.349834479" lastFinishedPulling="2023-09-19 16:54:30.814966539 +0000 UTC m=+19.798114315" observedRunningTime="2023-09-19 16:54:31.52920746 +0000 UTC m=+20.512355254" watchObservedRunningTime="2023-09-19 16:54:36.05110696 +0000 UTC m=+25.034254801"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.051589 2458 topology_manager.go:215] "Topology Admit Handler" podUID="61db80e1-b248-49b3-aab0-4b70b4b47c51" podNamespace="kube-system" podName="storage-provisioner"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.056853 2458 topology_manager.go:215] "Topology Admit Handler" podUID="d4fcd880-e2ad-4d44-a070-e2af114e5e38" podNamespace="kube-system" podName="coredns-5dd5756b68-ctsv5"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.170661 2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/61db80e1-b248-49b3-aab0-4b70b4b47c51-tmp\") pod \"storage-provisioner\" (UID: \"61db80e1-b248-49b3-aab0-4b70b4b47c51\") " pod="kube-system/storage-provisioner"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.170862 2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2w7\" (UniqueName: \"kubernetes.io/projected/61db80e1-b248-49b3-aab0-4b70b4b47c51-kube-api-access-zt2w7\") pod \"storage-provisioner\" (UID: \"61db80e1-b248-49b3-aab0-4b70b4b47c51\") " pod="kube-system/storage-provisioner"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.171051 2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4fcd880-e2ad-4d44-a070-e2af114e5e38-config-volume\") pod \"coredns-5dd5756b68-ctsv5\" (UID: \"d4fcd880-e2ad-4d44-a070-e2af114e5e38\") " pod="kube-system/coredns-5dd5756b68-ctsv5"
Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.171122 2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mbn\" (UniqueName: \"kubernetes.io/projected/d4fcd880-e2ad-4d44-a070-e2af114e5e38-kube-api-access-f2mbn\") pod \"coredns-5dd5756b68-ctsv5\" (UID: \"d4fcd880-e2ad-4d44-a070-e2af114e5e38\") " pod="kube-system/coredns-5dd5756b68-ctsv5"
Sep 19 16:54:37 multinode-415589 kubelet[2458]: I0919 16:54:37.070979 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc73b0c19acc1823f938bdac00e9aef48901d30a9938252e5bfa445f3b60ab4"
Sep 19 16:54:37 multinode-415589 kubelet[2458]: I0919 16:54:37.251320 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48a984b726602555fe6103a682cf7c01cbdc4cfc063e347b37e7b664cd0efd9"
Sep 19 16:54:38 multinode-415589 kubelet[2458]: I0919 16:54:38.307790 2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.307750753 podCreationTimestamp="2023-09-19 16:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:54:38.28465408 +0000 UTC m=+27.267801876" watchObservedRunningTime="2023-09-19 16:54:38.307750753 +0000 UTC m=+27.290898532"
Sep 19 16:54:38 multinode-415589 kubelet[2458]: I0919 16:54:38.308501 2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ctsv5" podStartSLOduration=15.308470382 podCreationTimestamp="2023-09-19 16:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:54:38.307594127 +0000 UTC m=+27.290741924" watchObservedRunningTime="2023-09-19 16:54:38.308470382 +0000 UTC m=+27.291618178"
Sep 19 16:55:11 multinode-415589 kubelet[2458]: E0919 16:55:11.576449 2458 iptables.go:575] "Could not set up iptables canary" err=<
Sep 19 16:55:11 multinode-415589 kubelet[2458]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Sep 19 16:55:11 multinode-415589 kubelet[2458]: Perhaps ip6tables or your kernel needs to be upgraded.
Sep 19 16:55:11 multinode-415589 kubelet[2458]: > table="nat" chain="KUBE-KUBELET-CANARY"
Sep 19 16:55:37 multinode-415589 kubelet[2458]: I0919 16:55:37.962060 2458 topology_manager.go:215] "Topology Admit Handler" podUID="f7b2cebb-4d8b-43fd-9f27-3f5f0b434f77" podNamespace="default" podName="busybox-5bc68d56bd-rkqh6"
Sep 19 16:55:38 multinode-415589 kubelet[2458]: I0919 16:55:38.068470 2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x75b\" (UniqueName: \"kubernetes.io/projected/f7b2cebb-4d8b-43fd-9f27-3f5f0b434f77-kube-api-access-6x75b\") pod \"busybox-5bc68d56bd-rkqh6\" (UID: \"f7b2cebb-4d8b-43fd-9f27-3f5f0b434f77\") " pod="default/busybox-5bc68d56bd-rkqh6"
Sep 19 16:55:38 multinode-415589 kubelet[2458]: I0919 16:55:38.836263 2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9baecebc6dd1099654601979e6cbbfaa20f3e668e516fe2af70cd5d43fe75ab4"
Sep 19 16:55:40 multinode-415589 kubelet[2458]: I0919 16:55:40.884569 2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-rkqh6" podStartSLOduration=2.751872918 podCreationTimestamp="2023-09-19 16:55:37 +0000 UTC" firstStartedPulling="2023-09-19 16:55:38.877383877 +0000 UTC m=+87.860531653" lastFinishedPulling="2023-09-19 16:55:40.009995277 +0000 UTC m=+88.993143054" observedRunningTime="2023-09-19 16:55:40.88396924 +0000 UTC m=+89.867117037" watchObservedRunningTime="2023-09-19 16:55:40.884484319 +0000 UTC m=+89.867632116"
Sep 19 16:56:11 multinode-415589 kubelet[2458]: E0919 16:56:11.580544 2458 iptables.go:575] "Could not set up iptables canary" err=<
Sep 19 16:56:11 multinode-415589 kubelet[2458]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Sep 19 16:56:11 multinode-415589 kubelet[2458]: Perhaps ip6tables or your kernel needs to be upgraded.
Sep 19 16:56:11 multinode-415589 kubelet[2458]: > table="nat" chain="KUBE-KUBELET-CANARY"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-415589 -n multinode-415589
helpers_test.go:261: (dbg) Run: kubectl --context multinode-415589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (21.67s)